Test Report: Docker_Linux_crio_arm64 22101

e65f928d8ebd0537e3fd5f2753f43f3d5796d0a1:2025-12-12:42734

Failed tests (40/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.33
44 TestAddons/parallel/Registry 16.03
45 TestAddons/parallel/RegistryCreds 0.48
46 TestAddons/parallel/Ingress 143.45
47 TestAddons/parallel/InspektorGadget 6.26
48 TestAddons/parallel/MetricsServer 5.44
50 TestAddons/parallel/CSI 33.27
51 TestAddons/parallel/Headlamp 3.35
52 TestAddons/parallel/CloudSpanner 5.28
53 TestAddons/parallel/LocalPath 9.45
54 TestAddons/parallel/NvidiaDevicePlugin 6.32
55 TestAddons/parallel/Yakd 6.27
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 503.21
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.46
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.53
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.35
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.45
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 734.22
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.12
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.72
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.21
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.44
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.64
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.56
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.13
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 120.74
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.26
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.27
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.26
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.27
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.18
293 TestJSONOutput/pause/Command 1.87
299 TestJSONOutput/unpause/Command 2.33
358 TestKubernetesUpgrade 782.31
384 TestPause/serial/Pause 8.33
481 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 7200.128
TestAddons/serial/Volcano (0.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable volcano --alsologtostderr -v=1: exit status 11 (329.700587ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:13:32.865245  497870 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:13:32.867931  497870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:32.867949  497870 out.go:374] Setting ErrFile to fd 2...
	I1212 00:13:32.867955  497870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:32.868250  497870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:13:32.868590  497870 mustload.go:66] Loading cluster: addons-199484
	I1212 00:13:32.869009  497870 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:13:32.869021  497870 addons.go:622] checking whether the cluster is paused
	I1212 00:13:32.869130  497870 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:13:32.869139  497870 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:13:32.869641  497870 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:13:32.895978  497870 ssh_runner.go:195] Run: systemctl --version
	I1212 00:13:32.896042  497870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:13:32.914499  497870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:13:33.033879  497870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:13:33.034052  497870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:13:33.067729  497870 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:13:33.067752  497870 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:13:33.067758  497870 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:13:33.067763  497870 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:13:33.067766  497870 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:13:33.067771  497870 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:13:33.067775  497870 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:13:33.067779  497870 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:13:33.067783  497870 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:13:33.067790  497870 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:13:33.067800  497870 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:13:33.067807  497870 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:13:33.067811  497870 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:13:33.067814  497870 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:13:33.067817  497870 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:13:33.067822  497870 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:13:33.067826  497870 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:13:33.067829  497870 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:13:33.067833  497870 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:13:33.067836  497870 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:13:33.067841  497870 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:13:33.067845  497870 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:13:33.067848  497870 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:13:33.067851  497870 cri.go:89] found id: ""
	I1212 00:13:33.067910  497870 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:13:33.097768  497870 out.go:203] 
	W1212 00:13:33.100896  497870 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:13:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:13:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:13:33.100926  497870 out.go:285] * 
	* 
	W1212 00:13:33.107549  497870 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:13:33.110491  497870 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.33s)
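
The addon failures detailed in this report share the signature above: "addons disable" exits with status 11 (MK_ADDON_DISABLE_PAUSED) because the paused-state check shells out to "sudo runc list -f json", and runc aborts with "open /run/runc: no such file or directory". The sketch below shows a probe of that shape in Go; the helper name and the fallback behavior (treating a missing runc state root as "nothing is paused") are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// listRuncContainers mirrors the failing check: ask runc for its
// container list as JSON. As a sketch, a missing /run/runc state root
// is treated as an empty list rather than a hard error, since nothing
// can be paused if runc has never created a container there.
func listRuncContainers() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if _, statErr := os.Stat("/run/runc"); errors.Is(statErr, os.ErrNotExist) {
			return []byte("[]"), nil // state root absent: no containers, none paused
		}
		return nil, fmt.Errorf("runc list: %w: %s", err, out)
	}
	return out, nil
}

func main() {
	out, err := listRuncContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s\n", out)
}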

TestAddons/parallel/Registry (16.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.966565ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-d69pq" [c6dc399c-e268-46e8-a3ea-8929470b439b] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003240812s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-rj8pk" [c6c691b6-a31f-4583-bd46-b7cbd62968ed] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004274681s
addons_test.go:394: (dbg) Run:  kubectl --context addons-199484 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-199484 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-199484 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.409504724s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 ip
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable registry --alsologtostderr -v=1: exit status 11 (344.173888ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:13:59.207534  498789 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:13:59.209608  498789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:59.209661  498789 out.go:374] Setting ErrFile to fd 2...
	I1212 00:13:59.209682  498789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:59.210223  498789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:13:59.210552  498789 mustload.go:66] Loading cluster: addons-199484
	I1212 00:13:59.211105  498789 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:13:59.211154  498789 addons.go:622] checking whether the cluster is paused
	I1212 00:13:59.211331  498789 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:13:59.211367  498789 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:13:59.211984  498789 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:13:59.237674  498789 ssh_runner.go:195] Run: systemctl --version
	I1212 00:13:59.237742  498789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:13:59.257426  498789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:13:59.377723  498789 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:13:59.377798  498789 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:13:59.436261  498789 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:13:59.436289  498789 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:13:59.436294  498789 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:13:59.436298  498789 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:13:59.436320  498789 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:13:59.436324  498789 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:13:59.436327  498789 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:13:59.436330  498789 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:13:59.436334  498789 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:13:59.436339  498789 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:13:59.436342  498789 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:13:59.436345  498789 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:13:59.436349  498789 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:13:59.436351  498789 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:13:59.436354  498789 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:13:59.436359  498789 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:13:59.436362  498789 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:13:59.436365  498789 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:13:59.436368  498789 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:13:59.436371  498789 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:13:59.436376  498789 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:13:59.436379  498789 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:13:59.436382  498789 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:13:59.436385  498789 cri.go:89] found id: ""
	I1212 00:13:59.436443  498789 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:13:59.452834  498789 out.go:203] 
	W1212 00:13:59.455833  498789 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:13:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:13:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:13:59.455855  498789 out.go:285] * 
	* 
	W1212 00:13:59.462473  498789 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:13:59.465574  498789 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.03s)
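
Note that the reachability step above ("wget --spider -S http://registry.kube-system.svc.cluster.local") passed; only the final disable step failed, with the same runc error as the Volcano test. The wget call is a headers-only HTTP probe against the registry Service's cluster DNS name; a rough Go equivalent, assuming it runs inside a pod with cluster DNS:

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 15 * time.Second}
	// HEAD mirrors wget --spider: fetch response headers, skip the body.
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Fprintln(os.Stderr, "registry unreachable:", err)
		os.Exit(1)
	}
	resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}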

TestAddons/parallel/RegistryCreds (0.48s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.286665ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-199484
addons_test.go:334: (dbg) Run:  kubectl --context addons-199484 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (265.456504ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:14:48.630342  500314 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:14:48.632232  500314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:48.632254  500314 out.go:374] Setting ErrFile to fd 2...
	I1212 00:14:48.632260  500314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:48.632615  500314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:14:48.632964  500314 mustload.go:66] Loading cluster: addons-199484
	I1212 00:14:48.633406  500314 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:48.633513  500314 addons.go:622] checking whether the cluster is paused
	I1212 00:14:48.633665  500314 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:48.633684  500314 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:14:48.634261  500314 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:14:48.652064  500314 ssh_runner.go:195] Run: systemctl --version
	I1212 00:14:48.652121  500314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:14:48.670377  500314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:14:48.779948  500314 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:14:48.780043  500314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:14:48.811155  500314 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:14:48.811179  500314 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:14:48.811184  500314 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:14:48.811189  500314 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:14:48.811192  500314 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:14:48.811196  500314 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:14:48.811200  500314 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:14:48.811203  500314 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:14:48.811207  500314 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:14:48.811214  500314 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:14:48.811218  500314 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:14:48.811221  500314 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:14:48.811225  500314 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:14:48.811228  500314 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:14:48.811231  500314 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:14:48.811240  500314 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:14:48.811248  500314 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:14:48.811252  500314 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:14:48.811256  500314 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:14:48.811259  500314 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:14:48.811263  500314 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:14:48.811267  500314 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:14:48.811270  500314 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:14:48.811274  500314 cri.go:89] found id: ""
	I1212 00:14:48.811335  500314 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:14:48.828458  500314 out.go:203] 
	W1212 00:14:48.831375  500314 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:14:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:14:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:14:48.831400  500314 out.go:285] * 
	* 
	W1212 00:14:48.837898  500314 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:14:48.840763  500314 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.48s)

TestAddons/parallel/Ingress (143.45s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-199484 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-199484 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-199484 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [c1bd4a62-e2cf-4f01-8925-2624f3766172] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [c1bd4a62-e2cf-4f01-8925-2624f3766172] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003651711s
I1212 00:14:19.835750  490954 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.836730793s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-199484 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
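
The step that actually failed is the curl probe ("curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"); exit status 28 is curl's operation-timed-out code, surfaced here through ssh after the 2m9s run above. The request exercises Host-header routing: the ingress controller matches the Ingress rule for nginx.example.com even though the request targets 127.0.0.1. A hedged Go equivalent of the same probe, were it run on the node itself:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Setting req.Host overrides the Host header, so the ingress
	// controller routes this request by the nginx.example.com rule.
	req.Host = "nginx.example.com"
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("probe failed (the test saw a timeout here):", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}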
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-199484
helpers_test.go:244: (dbg) docker inspect addons-199484:

-- stdout --
	[
	    {
	        "Id": "ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4",
	        "Created": "2025-12-12T00:11:12.261776666Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492348,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:11:12.329565428Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4/hostname",
	        "HostsPath": "/var/lib/docker/containers/ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4/hosts",
	        "LogPath": "/var/lib/docker/containers/ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4/ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4-json.log",
	        "Name": "/addons-199484",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-199484:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-199484",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4",
	                "LowerDir": "/var/lib/docker/overlay2/d51356596acc355d5a6c092cac7e7a8d08960e1901219805b5786939e96f7976-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d51356596acc355d5a6c092cac7e7a8d08960e1901219805b5786939e96f7976/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d51356596acc355d5a6c092cac7e7a8d08960e1901219805b5786939e96f7976/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d51356596acc355d5a6c092cac7e7a8d08960e1901219805b5786939e96f7976/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-199484",
	                "Source": "/var/lib/docker/volumes/addons-199484/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-199484",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-199484",
	                "name.minikube.sigs.k8s.io": "addons-199484",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eae61459bc079810e3bdb36dfba9b7a4ed351e3af9cb3236fdaddc4cf5dfe19d",
	            "SandboxKey": "/var/run/docker/netns/eae61459bc07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-199484": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:98:0a:c9:3f:71",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ce92482db3562eb95f488fcf02c1e6dbbc66a1d250ac9e97b5672f5fb8af901",
	                    "EndpointID": "bd3db023fc14cbf774f28492983aee2fadd2e8070224b972d2973fc38d9c2ece",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-199484",
	                        "ea606c0010f1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
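
The "Ports" block in the inspect output above shows the 22/tcp -> 127.0.0.1:33168 mapping that the addon commands' ssh clients used (the sshutil.go lines in the stderr logs). The logs recover it with a Go template passed to docker inspect; the same lookup scripted in Go, using this run's profile name addons-199484 as the container:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Template copied from the cli_runner.go calls in the logs above.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"addons-199484").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33168
}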
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-199484 -n addons-199484
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-199484 logs -n 25: (1.403272765s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-950363                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-950363 │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │ 12 Dec 25 00:11 UTC │
	│ start   │ --download-only -p binary-mirror-824111 --alsologtostderr --binary-mirror http://127.0.0.1:35743 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-824111   │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │                     │
	│ delete  │ -p binary-mirror-824111                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-824111   │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │ 12 Dec 25 00:11 UTC │
	│ addons  │ enable dashboard -p addons-199484                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │                     │
	│ addons  │ disable dashboard -p addons-199484                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │                     │
	│ start   │ -p addons-199484 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │ 12 Dec 25 00:13 UTC │
	│ addons  │ addons-199484 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:13 UTC │                     │
	│ addons  │ addons-199484 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:13 UTC │                     │
	│ addons  │ enable headlamp -p addons-199484 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:13 UTC │                     │
	│ addons  │ addons-199484 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:13 UTC │                     │
	│ addons  │ addons-199484 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:13 UTC │                     │
	│ ip      │ addons-199484 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:13 UTC │ 12 Dec 25 00:13 UTC │
	│ addons  │ addons-199484 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:13 UTC │                     │
	│ addons  │ addons-199484 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:13 UTC │                     │
	│ addons  │ addons-199484 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:14 UTC │                     │
	│ ssh     │ addons-199484 ssh cat /opt/local-path-provisioner/pvc-94ef3571-46f0-4f3c-928f-9c7893519f68_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:14 UTC │ 12 Dec 25 00:14 UTC │
	│ addons  │ addons-199484 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:14 UTC │                     │
	│ addons  │ addons-199484 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:14 UTC │                     │
	│ ssh     │ addons-199484 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:14 UTC │                     │
	│ addons  │ addons-199484 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:14 UTC │                     │
	│ addons  │ addons-199484 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:14 UTC │                     │
	│ addons  │ addons-199484 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:14 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-199484                                                                                                                                                                                                                                                                                                                                                                                           │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:14 UTC │ 12 Dec 25 00:14 UTC │
	│ addons  │ addons-199484 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:14 UTC │                     │
	│ ip      │ addons-199484 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:16 UTC │ 12 Dec 25 00:16 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:11:05
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:11:05.886863  491960 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:11:05.887048  491960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:05.887078  491960 out.go:374] Setting ErrFile to fd 2...
	I1212 00:11:05.887098  491960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:05.887356  491960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:11:05.887828  491960 out.go:368] Setting JSON to false
	I1212 00:11:05.888701  491960 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10411,"bootTime":1765487855,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:11:05.888800  491960 start.go:143] virtualization:  
	I1212 00:11:05.892192  491960 out.go:179] * [addons-199484] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:11:05.895875  491960 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:11:05.895984  491960 notify.go:221] Checking for updates...
	I1212 00:11:05.901632  491960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:11:05.904616  491960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:11:05.907673  491960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:11:05.910753  491960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:11:05.913692  491960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:11:05.916847  491960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:11:05.946479  491960 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:11:05.946620  491960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:06.019080  491960 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 00:11:06.009218883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:11:06.019201  491960 docker.go:319] overlay module found
	I1212 00:11:06.022349  491960 out.go:179] * Using the docker driver based on user configuration
	I1212 00:11:06.025292  491960 start.go:309] selected driver: docker
	I1212 00:11:06.025322  491960 start.go:927] validating driver "docker" against <nil>
	I1212 00:11:06.025338  491960 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:11:06.026110  491960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:06.085034  491960 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 00:11:06.075938082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:11:06.085224  491960 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 00:11:06.085448  491960 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:11:06.088316  491960 out.go:179] * Using Docker driver with root privileges
	I1212 00:11:06.091205  491960 cni.go:84] Creating CNI manager for ""
	I1212 00:11:06.091276  491960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:11:06.091291  491960 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:11:06.091366  491960 start.go:353] cluster config:
	{Name:addons-199484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-199484 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:11:06.094454  491960 out.go:179] * Starting "addons-199484" primary control-plane node in "addons-199484" cluster
	I1212 00:11:06.097192  491960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:11:06.100068  491960 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:11:06.102817  491960 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:11:06.102863  491960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 00:11:06.102877  491960 cache.go:65] Caching tarball of preloaded images
	I1212 00:11:06.102891  491960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:11:06.102958  491960 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:11:06.102967  491960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:11:06.103295  491960 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/config.json ...
	I1212 00:11:06.103314  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/config.json: {Name:mkb0180a663286b7d6ac48daf7c76698a2b89094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:06.121696  491960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:11:06.121719  491960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:11:06.121733  491960 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:11:06.121763  491960 start.go:360] acquireMachinesLock for addons-199484: {Name:mk0ad7b9808d61c7612549b1b854c58edfb0a661 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:11:06.121870  491960 start.go:364] duration metric: took 86.349µs to acquireMachinesLock for "addons-199484"
	I1212 00:11:06.121900  491960 start.go:93] Provisioning new machine with config: &{Name:addons-199484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-199484 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:11:06.121966  491960 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:11:06.125369  491960 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1212 00:11:06.125604  491960 start.go:159] libmachine.API.Create for "addons-199484" (driver="docker")
	I1212 00:11:06.125637  491960 client.go:173] LocalClient.Create starting
	I1212 00:11:06.125757  491960 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem
	I1212 00:11:06.365067  491960 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem
	I1212 00:11:07.044334  491960 cli_runner.go:164] Run: docker network inspect addons-199484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:11:07.064236  491960 cli_runner.go:211] docker network inspect addons-199484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:11:07.064322  491960 network_create.go:284] running [docker network inspect addons-199484] to gather additional debugging logs...
	I1212 00:11:07.064352  491960 cli_runner.go:164] Run: docker network inspect addons-199484
	W1212 00:11:07.084577  491960 cli_runner.go:211] docker network inspect addons-199484 returned with exit code 1
	I1212 00:11:07.084610  491960 network_create.go:287] error running [docker network inspect addons-199484]: docker network inspect addons-199484: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-199484 not found
	I1212 00:11:07.084629  491960 network_create.go:289] output of [docker network inspect addons-199484]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-199484 not found
	
	** /stderr **
	I1212 00:11:07.084721  491960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:11:07.102774  491960 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197caa0}
	I1212 00:11:07.102825  491960 network_create.go:124] attempt to create docker network addons-199484 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 00:11:07.102889  491960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-199484 addons-199484
	I1212 00:11:07.160631  491960 network_create.go:108] docker network addons-199484 192.168.49.0/24 created
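	For reference, the inspect-then-create sequence above reduces to the following shell sketch. The subnet, gateway, and flags are copied from the log; the real code picks a free private subnet dynamically rather than hard-coding 192.168.49.0/24:
	# Probe for the network; `docker network inspect` exits non-zero when it is absent.
	if ! docker network inspect addons-199484 >/dev/null 2>&1; then
	  docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 addons-199484
	fi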
	I1212 00:11:07.160658  491960 kic.go:121] calculated static IP "192.168.49.2" for the "addons-199484" container
	I1212 00:11:07.160733  491960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:11:07.176071  491960 cli_runner.go:164] Run: docker volume create addons-199484 --label name.minikube.sigs.k8s.io=addons-199484 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:11:07.193398  491960 oci.go:103] Successfully created a docker volume addons-199484
	I1212 00:11:07.193488  491960 cli_runner.go:164] Run: docker run --rm --name addons-199484-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-199484 --entrypoint /usr/bin/test -v addons-199484:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:11:08.267693  491960 cli_runner.go:217] Completed: docker run --rm --name addons-199484-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-199484 --entrypoint /usr/bin/test -v addons-199484:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.074154812s)
	I1212 00:11:08.267724  491960 oci.go:107] Successfully prepared a docker volume addons-199484
	I1212 00:11:08.267769  491960 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:11:08.267781  491960 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:11:08.267845  491960 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-199484:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:11:12.193578  491960 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-199484:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.925686894s)
	I1212 00:11:12.193611  491960 kic.go:203] duration metric: took 3.925825894s to extract preloaded images to volume ...
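	The extraction step above reuses the preload tarball already present in the local cache. To poke at that cache by hand, a sketch (assumes lz4 and tar are available on the host, as they are inside the kicbase image used by the extract command):
	ls -lh /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/
	# List the tarball contents without extracting anything:
	lz4 -dc preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 | tar -t | head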
	W1212 00:11:12.193760  491960 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 00:11:12.193879  491960 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:11:12.248646  491960 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-199484 --name addons-199484 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-199484 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-199484 --network addons-199484 --ip 192.168.49.2 --volume addons-199484:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
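	The --publish=127.0.0.1::<port> flags above let Docker pick random host ports; the SSH provisioning below resolves 22/tcp to 127.0.0.1:33168. The same mapping can be read back directly:
	docker port addons-199484
	# e.g.  22/tcp -> 127.0.0.1:33168   (the host port the SSH client dials below)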
	I1212 00:11:12.538937  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Running}}
	I1212 00:11:12.565751  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:12.589012  491960 cli_runner.go:164] Run: docker exec addons-199484 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:11:12.634367  491960 oci.go:144] the created container "addons-199484" has a running status.
	I1212 00:11:12.634406  491960 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa...
	I1212 00:11:13.430010  491960 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:11:13.463594  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:13.487014  491960 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:11:13.487039  491960 kic_runner.go:114] Args: [docker exec --privileged addons-199484 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:11:13.534377  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:13.554153  491960 machine.go:94] provisionDockerMachine start ...
	I1212 00:11:13.554266  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:13.571308  491960 main.go:143] libmachine: Using SSH client type: native
	I1212 00:11:13.571911  491960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1212 00:11:13.571927  491960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:11:13.730490  491960 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-199484
	
	I1212 00:11:13.730511  491960 ubuntu.go:182] provisioning hostname "addons-199484"
	I1212 00:11:13.730577  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:13.750265  491960 main.go:143] libmachine: Using SSH client type: native
	I1212 00:11:13.750589  491960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1212 00:11:13.750601  491960 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-199484 && echo "addons-199484" | sudo tee /etc/hostname
	I1212 00:11:13.925013  491960 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-199484
	
	I1212 00:11:13.925175  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:13.943461  491960 main.go:143] libmachine: Using SSH client type: native
	I1212 00:11:13.943771  491960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1212 00:11:13.943792  491960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-199484' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-199484/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-199484' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:11:14.094950  491960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
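	A quick way to confirm the /etc/hosts rewrite above took effect inside the node (sketch):
	docker exec addons-199484 grep addons-199484 /etc/hosts
	# should show a "127.0.1.1 addons-199484" entry, whichever branch of the snippet above ran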
	I1212 00:11:14.094979  491960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 00:11:14.095008  491960 ubuntu.go:190] setting up certificates
	I1212 00:11:14.095024  491960 provision.go:84] configureAuth start
	I1212 00:11:14.095091  491960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-199484
	I1212 00:11:14.111642  491960 provision.go:143] copyHostCerts
	I1212 00:11:14.111814  491960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 00:11:14.111942  491960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 00:11:14.112012  491960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 00:11:14.112066  491960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.addons-199484 san=[127.0.0.1 192.168.49.2 addons-199484 localhost minikube]
	I1212 00:11:14.393430  491960 provision.go:177] copyRemoteCerts
	I1212 00:11:14.393498  491960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:11:14.393541  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:14.410975  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:14.514362  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:11:14.531389  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 00:11:14.548878  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:11:14.565677  491960 provision.go:87] duration metric: took 470.627362ms to configureAuth
	I1212 00:11:14.565703  491960 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:11:14.565887  491960 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:11:14.565988  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:14.583880  491960 main.go:143] libmachine: Using SSH client type: native
	I1212 00:11:14.584191  491960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1212 00:11:14.584214  491960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:11:15.064014  491960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:11:15.064101  491960 machine.go:97] duration metric: took 1.509922315s to provisionDockerMachine
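	The sysconfig file written a few lines above can be read back to confirm the insecure-registry flag landed (sketch):
	docker exec addons-199484 cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '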
	I1212 00:11:15.064127  491960 client.go:176] duration metric: took 8.93847956s to LocalClient.Create
	I1212 00:11:15.064180  491960 start.go:167] duration metric: took 8.938576674s to libmachine.API.Create "addons-199484"
	I1212 00:11:15.064206  491960 start.go:293] postStartSetup for "addons-199484" (driver="docker")
	I1212 00:11:15.064243  491960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:11:15.064359  491960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:11:15.064428  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:15.082647  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:15.190997  491960 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:11:15.194363  491960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:11:15.194395  491960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:11:15.194407  491960 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 00:11:15.194480  491960 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 00:11:15.194508  491960 start.go:296] duration metric: took 130.286128ms for postStartSetup
	I1212 00:11:15.194881  491960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-199484
	I1212 00:11:15.211523  491960 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/config.json ...
	I1212 00:11:15.211814  491960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:11:15.211869  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:15.228607  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:15.327491  491960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:11:15.331991  491960 start.go:128] duration metric: took 9.210009398s to createHost
	I1212 00:11:15.332016  491960 start.go:83] releasing machines lock for "addons-199484", held for 9.210131832s
	I1212 00:11:15.332086  491960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-199484
	I1212 00:11:15.348595  491960 ssh_runner.go:195] Run: cat /version.json
	I1212 00:11:15.348655  491960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:11:15.348714  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:15.348659  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:15.376209  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:15.377214  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:15.580882  491960 ssh_runner.go:195] Run: systemctl --version
	I1212 00:11:15.587054  491960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:11:15.630104  491960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:11:15.634167  491960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:11:15.634290  491960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:11:15.663112  491960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 00:11:15.663137  491960 start.go:496] detecting cgroup driver to use...
	I1212 00:11:15.663170  491960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:11:15.663223  491960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:11:15.680144  491960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:11:15.692958  491960 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:11:15.693019  491960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:11:15.711808  491960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:11:15.730188  491960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:11:15.855113  491960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:11:15.981273  491960 docker.go:234] disabling docker service ...
	I1212 00:11:15.981341  491960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:11:16.005759  491960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:11:16.020025  491960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:11:16.138918  491960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:11:16.260150  491960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:11:16.273038  491960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:11:16.286950  491960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:11:16.287067  491960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.295770  491960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:11:16.295882  491960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.304994  491960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.313650  491960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.322377  491960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:11:16.330602  491960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.339298  491960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.352236  491960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
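	The chain of sed edits above converges on a handful of keys in /etc/crio/crio.conf.d/02-crio.conf; grepping them back out is the easiest verification (sketch, expected values reconstructed from the commands above, not captured in this log):
	docker exec addons-199484 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",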
	I1212 00:11:16.361361  491960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:11:16.368831  491960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:11:16.376059  491960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:11:16.494476  491960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:11:16.661327  491960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:11:16.661439  491960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:11:16.664944  491960 start.go:564] Will wait 60s for crictl version
	I1212 00:11:16.665045  491960 ssh_runner.go:195] Run: which crictl
	I1212 00:11:16.668323  491960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:11:16.695203  491960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:11:16.695295  491960 ssh_runner.go:195] Run: crio --version
	I1212 00:11:16.728514  491960 ssh_runner.go:195] Run: crio --version
	I1212 00:11:16.763836  491960 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:11:16.766754  491960 cli_runner.go:164] Run: docker network inspect addons-199484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:11:16.782570  491960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:11:16.786207  491960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:11:16.796087  491960 kubeadm.go:884] updating cluster {Name:addons-199484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-199484 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1212 00:11:16.796198  491960 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:11:16.796256  491960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:11:16.834853  491960 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:11:16.834878  491960 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:11:16.834935  491960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:11:16.859703  491960 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:11:16.859723  491960 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:11:16.859731  491960 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 00:11:16.859822  491960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-199484 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-199484 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
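	On the node this drop-in lands as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (written by the scp below); the assembled unit plus drop-in can be reviewed in one shot (sketch):
	docker exec addons-199484 systemctl cat kubelet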
	I1212 00:11:16.859901  491960 ssh_runner.go:195] Run: crio config
	I1212 00:11:16.933667  491960 cni.go:84] Creating CNI manager for ""
	I1212 00:11:16.933698  491960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:11:16.933713  491960 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:11:16.933735  491960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-199484 NodeName:addons-199484 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:11:16.933856  491960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-199484"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:11:16.933930  491960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:11:16.941353  491960 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:11:16.941440  491960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:11:16.948553  491960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 00:11:16.961156  491960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:11:16.973843  491960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
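	The generated config just landed on the node as /var/tmp/minikube/kubeadm.yaml.new (minikube's staging copy; it is moved into place as kubeadm.yaml before use). A dry run can validate it without touching the cluster, using the version-pinned binary found above (sketch, not part of this test run):
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run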
	I1212 00:11:16.987350  491960 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:11:16.990745  491960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:11:17.000743  491960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:11:17.112775  491960 ssh_runner.go:195] Run: sudo systemctl start kubelet
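	Two quick health checks after the kubelet start above (sketch):
	sudo systemctl is-active kubelet
	sudo journalctl -u kubelet --no-pager -n 20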
	I1212 00:11:17.137108  491960 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484 for IP: 192.168.49.2
	I1212 00:11:17.137140  491960 certs.go:195] generating shared ca certs ...
	I1212 00:11:17.137156  491960 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.137312  491960 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 00:11:17.545149  491960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt ...
	I1212 00:11:17.545181  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt: {Name:mkc9e8c03ac146bc0b82eb43d2f9f0c2d520900a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.545373  491960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key ...
	I1212 00:11:17.545386  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key: {Name:mk0c335221059a76aadf8a9fd23566576e5fa774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.545474  491960 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 00:11:17.718545  491960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt ...
	I1212 00:11:17.718575  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt: {Name:mk7b8eacc0ff96f20a5cd88df0ad0ddcb911fbd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.718784  491960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key ...
	I1212 00:11:17.718797  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key: {Name:mkcb373edc21f86efa547cdc61540c464c7ee641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.718884  491960 certs.go:257] generating profile certs ...
	I1212 00:11:17.718940  491960 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.key
	I1212 00:11:17.718958  491960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt with IP's: []
	I1212 00:11:17.861464  491960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt ...
	I1212 00:11:17.861494  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: {Name:mk0ed8c16fe2ae076e351bd642c46cd1523ae12f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.861670  491960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.key ...
	I1212 00:11:17.861683  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.key: {Name:mk9f1923a31411f739dd491cc954c17da22960c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.861764  491960 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.key.e9b1064f
	I1212 00:11:17.861786  491960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.crt.e9b1064f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1212 00:11:18.125621  491960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.crt.e9b1064f ...
	I1212 00:11:18.125654  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.crt.e9b1064f: {Name:mk3c9597be8ae0c0b1e17754be606142bffc8b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:18.125839  491960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.key.e9b1064f ...
	I1212 00:11:18.125853  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.key.e9b1064f: {Name:mk3131f100cd68e2ae2c35057b05df204d041bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:18.125942  491960 certs.go:382] copying /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.crt.e9b1064f -> /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.crt
	I1212 00:11:18.126030  491960 certs.go:386] copying /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.key.e9b1064f -> /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.key
	I1212 00:11:18.126082  491960 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.key
	I1212 00:11:18.126102  491960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.crt with IP's: []
	I1212 00:11:18.811836  491960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.crt ...
	I1212 00:11:18.811868  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.crt: {Name:mk5b01029265356fd9c38f5ba3fd7dc73f714a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:18.812051  491960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.key ...
	I1212 00:11:18.812066  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.key: {Name:mk1b394b95f3b182c866f6a971a134dfd92db86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:18.812261  491960 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:11:18.812309  491960 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:11:18.812341  491960 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:11:18.812377  491960 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 00:11:18.812982  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:11:18.831984  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:11:18.850205  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:11:18.868456  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:11:18.885961  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 00:11:18.903301  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:11:18.920680  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:11:18.937845  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:11:18.955002  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:11:18.972638  491960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:11:18.986672  491960 ssh_runner.go:195] Run: openssl version
	I1212 00:11:18.992890  491960 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:11:19.000328  491960 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:11:19.009300  491960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:11:19.013133  491960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:11:19.013207  491960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:11:19.054156  491960 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:11:19.061544  491960 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
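The openssl hash and symlink steps above wire minikubeCA into the node's system trust store after the certs were copied to /var/lib/minikube/certs. One way to confirm the resulting trust relationship — a sketch assuming only the cert paths shown in the log, not anything minikube itself runs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func loadCert(path string) *x509.Certificate {
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        return cert
    }

    func main() {
        roots := x509.NewCertPool()
        roots.AddCert(loadCert("/var/lib/minikube/certs/ca.crt"))
        apiserver := loadCert("/var/lib/minikube/certs/apiserver.crt")
        // The apiserver cert was issued for 192.168.49.2 among other IPs
        // (see the "with IP's" line earlier), so it should chain to the CA.
        if _, err := apiserver.Verify(x509.VerifyOptions{Roots: roots}); err != nil {
            panic(err)
        }
        fmt.Println("apiserver.crt chains to minikubeCA")
    }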
	I1212 00:11:19.069325  491960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:11:19.072751  491960 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:11:19.072800  491960 kubeadm.go:401] StartCluster: {Name:addons-199484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-199484 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:11:19.072883  491960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:11:19.072946  491960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:11:19.098304  491960 cri.go:89] found id: ""
	I1212 00:11:19.098371  491960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:11:19.106041  491960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:11:19.113502  491960 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:11:19.113594  491960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:11:19.121100  491960 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:11:19.121121  491960 kubeadm.go:158] found existing configuration files:
	
	I1212 00:11:19.121179  491960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:11:19.128779  491960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:11:19.128847  491960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:11:19.136076  491960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:11:19.143613  491960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:11:19.143703  491960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:11:19.151115  491960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:11:19.158650  491960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:11:19.158772  491960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:11:19.165980  491960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:11:19.173669  491960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:11:19.173851  491960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:11:19.181219  491960 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:11:19.223236  491960 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 00:11:19.223621  491960 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:11:19.246992  491960 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:11:19.247068  491960 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:11:19.247116  491960 kubeadm.go:319] OS: Linux
	I1212 00:11:19.247166  491960 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:11:19.247218  491960 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:11:19.247268  491960 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:11:19.247319  491960 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:11:19.247370  491960 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:11:19.247422  491960 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:11:19.247471  491960 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:11:19.247522  491960 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:11:19.247571  491960 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:11:19.318389  491960 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:11:19.318503  491960 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:11:19.318606  491960 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:11:19.335908  491960 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:11:19.342744  491960 out.go:252]   - Generating certificates and keys ...
	I1212 00:11:19.342873  491960 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:11:19.342960  491960 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:11:19.404492  491960 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:11:20.388610  491960 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:11:21.086529  491960 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:11:22.397741  491960 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:11:22.736326  491960 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:11:22.736501  491960 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-199484 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:11:23.161487  491960 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:11:23.161628  491960 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-199484 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:11:23.666327  491960 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:11:25.200235  491960 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:11:26.033358  491960 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:11:26.033484  491960 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:11:26.656649  491960 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:11:27.002645  491960 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:11:27.375747  491960 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:11:27.436330  491960 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:11:28.355907  491960 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:11:28.356916  491960 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:11:28.360096  491960 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:11:28.365478  491960 out.go:252]   - Booting up control plane ...
	I1212 00:11:28.365581  491960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:11:28.365659  491960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:11:28.365725  491960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:11:28.380495  491960 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:11:28.380612  491960 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:11:28.387824  491960 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:11:28.388129  491960 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:11:28.388398  491960 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:11:28.512170  491960 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:11:28.512299  491960 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:11:29.512420  491960 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001118148s
	I1212 00:11:29.516152  491960 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:11:29.516255  491960 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1212 00:11:29.516351  491960 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:11:29.516451  491960 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:11:33.484524  491960 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.967434247s
	I1212 00:11:35.038209  491960 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.522033833s
	I1212 00:11:36.019811  491960 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502878073s
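kubeadm's control-plane-check above polls the three endpoints it printed until each answers 200. A rough Go equivalent, run on the control-plane node itself (InsecureSkipVerify because the serving certs are self-signed; kubeadm's own probe differs in detail):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   3 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        // Endpoints exactly as logged by [control-plane-check] above.
        endpoints := []string{
            "https://192.168.49.2:8443/livez", // kube-apiserver
            "https://127.0.0.1:10257/healthz", // kube-controller-manager
            "https://127.0.0.1:10259/livez",   // kube-scheduler
        }
        for _, url := range endpoints {
            for {
                resp, err := client.Get(url)
                if err == nil && resp.StatusCode == http.StatusOK {
                    resp.Body.Close()
                    fmt.Println(url, "healthy")
                    break
                }
                if err == nil {
                    resp.Body.Close()
                }
                time.Sleep(time.Second)
            }
        }
    }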
	I1212 00:11:36.076596  491960 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:11:36.095166  491960 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:11:36.113396  491960 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:11:36.113822  491960 kubeadm.go:319] [mark-control-plane] Marking the node addons-199484 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:11:36.129172  491960 kubeadm.go:319] [bootstrap-token] Using token: 7jijhb.2uwctot57jgsdbqp
	I1212 00:11:36.132221  491960 out.go:252]   - Configuring RBAC rules ...
	I1212 00:11:36.132348  491960 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:11:36.141975  491960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:11:36.152365  491960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:11:36.157591  491960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:11:36.162053  491960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:11:36.169603  491960 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:11:36.430702  491960 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:11:36.867398  491960 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:11:37.433176  491960 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:11:37.434284  491960 kubeadm.go:319] 
	I1212 00:11:37.434366  491960 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:11:37.434376  491960 kubeadm.go:319] 
	I1212 00:11:37.434454  491960 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:11:37.434463  491960 kubeadm.go:319] 
	I1212 00:11:37.434494  491960 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:11:37.434575  491960 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:11:37.434642  491960 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:11:37.434648  491960 kubeadm.go:319] 
	I1212 00:11:37.434723  491960 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:11:37.434728  491960 kubeadm.go:319] 
	I1212 00:11:37.434775  491960 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:11:37.434778  491960 kubeadm.go:319] 
	I1212 00:11:37.434830  491960 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:11:37.434905  491960 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:11:37.434973  491960 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:11:37.434977  491960 kubeadm.go:319] 
	I1212 00:11:37.435061  491960 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:11:37.435154  491960 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:11:37.435158  491960 kubeadm.go:319] 
	I1212 00:11:37.435244  491960 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7jijhb.2uwctot57jgsdbqp \
	I1212 00:11:37.435346  491960 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ce44b751a4efdc166c14df937c34bee22cb46fcbf4350caae3257de1fd27835c \
	I1212 00:11:37.435367  491960 kubeadm.go:319] 	--control-plane 
	I1212 00:11:37.435370  491960 kubeadm.go:319] 
	I1212 00:11:37.435455  491960 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:11:37.435458  491960 kubeadm.go:319] 
	I1212 00:11:37.435540  491960 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7jijhb.2uwctot57jgsdbqp \
	I1212 00:11:37.435644  491960 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ce44b751a4efdc166c14df937c34bee22cb46fcbf4350caae3257de1fd27835c 
	I1212 00:11:37.439213  491960 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1212 00:11:37.439442  491960 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:11:37.439547  491960 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
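The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the CA certificate's Subject Public Key Info. A sketch recomputing it from the ca.crt written earlier in this log (an illustration of the hash scheme, not a kubeadm invocation):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }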
	I1212 00:11:37.439564  491960 cni.go:84] Creating CNI manager for ""
	I1212 00:11:37.439572  491960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:11:37.444438  491960 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 00:11:37.447320  491960 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:11:37.451440  491960 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 00:11:37.451465  491960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:11:37.465588  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:11:37.750954  491960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:11:37.751094  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:37.751176  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-199484 minikube.k8s.io/updated_at=2025_12_12T00_11_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=addons-199484 minikube.k8s.io/primary=true
	I1212 00:11:37.768307  491960 ops.go:34] apiserver oom_adj: -16
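The probe two lines up reads the legacy oom_adj knob for the apiserver process; -16 keeps the OOM killer away from it. The same check in Go, assuming pgrep is available on the node as in the logged command:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }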
	I1212 00:11:37.925151  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:38.425749  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:38.925437  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:39.425296  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:39.925511  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:40.425317  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:40.925342  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:41.425921  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:41.925269  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:42.425236  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:42.520295  491960 kubeadm.go:1114] duration metric: took 4.769249486s to wait for elevateKubeSystemPrivileges
	I1212 00:11:42.520322  491960 kubeadm.go:403] duration metric: took 23.447525529s to StartCluster
	I1212 00:11:42.520339  491960 settings.go:142] acquiring lock: {Name:mk274c10b2238dc32d72b68ac2e1ec517b8a72b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:42.520447  491960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:11:42.520840  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:42.521022  491960 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:11:42.521240  491960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:11:42.521399  491960 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1212 00:11:42.521500  491960 addons.go:70] Setting yakd=true in profile "addons-199484"
	I1212 00:11:42.521518  491960 addons.go:239] Setting addon yakd=true in "addons-199484"
	I1212 00:11:42.521552  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.521611  491960 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:11:42.521664  491960 addons.go:70] Setting inspektor-gadget=true in profile "addons-199484"
	I1212 00:11:42.521689  491960 addons.go:239] Setting addon inspektor-gadget=true in "addons-199484"
	I1212 00:11:42.521731  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.522100  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.522432  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.523006  491960 addons.go:70] Setting metrics-server=true in profile "addons-199484"
	I1212 00:11:42.523032  491960 addons.go:239] Setting addon metrics-server=true in "addons-199484"
	I1212 00:11:42.523057  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.523521  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.523789  491960 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-199484"
	I1212 00:11:42.523913  491960 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-199484"
	I1212 00:11:42.523959  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.524575  491960 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-199484"
	I1212 00:11:42.524627  491960 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-199484"
	I1212 00:11:42.524663  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.525152  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.525705  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.525714  491960 addons.go:70] Setting registry=true in profile "addons-199484"
	I1212 00:11:42.542879  491960 addons.go:239] Setting addon registry=true in "addons-199484"
	I1212 00:11:42.542927  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.543399  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.525724  491960 addons.go:70] Setting registry-creds=true in profile "addons-199484"
	I1212 00:11:42.550782  491960 addons.go:239] Setting addon registry-creds=true in "addons-199484"
	I1212 00:11:42.550831  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.551329  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.525732  491960 addons.go:70] Setting storage-provisioner=true in profile "addons-199484"
	I1212 00:11:42.558846  491960 addons.go:239] Setting addon storage-provisioner=true in "addons-199484"
	I1212 00:11:42.558887  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.525736  491960 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-199484"
	I1212 00:11:42.559124  491960 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-199484"
	I1212 00:11:42.559377  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.525739  491960 addons.go:70] Setting volcano=true in profile "addons-199484"
	I1212 00:11:42.585636  491960 addons.go:239] Setting addon volcano=true in "addons-199484"
	I1212 00:11:42.585683  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.586188  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.589814  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.525746  491960 addons.go:70] Setting volumesnapshots=true in profile "addons-199484"
	I1212 00:11:42.616035  491960 addons.go:239] Setting addon volumesnapshots=true in "addons-199484"
	I1212 00:11:42.616089  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.525771  491960 out.go:179] * Verifying Kubernetes components...
	I1212 00:11:42.525983  491960 addons.go:70] Setting gcp-auth=true in profile "addons-199484"
	I1212 00:11:42.525990  491960 addons.go:70] Setting cloud-spanner=true in profile "addons-199484"
	I1212 00:11:42.525994  491960 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-199484"
	I1212 00:11:42.525997  491960 addons.go:70] Setting default-storageclass=true in profile "addons-199484"
	I1212 00:11:42.638889  491960 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-199484"
	I1212 00:11:42.639504  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.526011  491960 addons.go:70] Setting ingress-dns=true in profile "addons-199484"
	I1212 00:11:42.653599  491960 addons.go:239] Setting addon ingress-dns=true in "addons-199484"
	I1212 00:11:42.653657  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.654128  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.654286  491960 addons.go:239] Setting addon cloud-spanner=true in "addons-199484"
	I1212 00:11:42.654319  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.654910  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.526016  491960 addons.go:70] Setting ingress=true in profile "addons-199484"
	I1212 00:11:42.665216  491960 addons.go:239] Setting addon ingress=true in "addons-199484"
	I1212 00:11:42.665262  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.665735  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.670463  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.687911  491960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:11:42.688176  491960 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1212 00:11:42.696958  491960 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1212 00:11:42.697012  491960 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1212 00:11:42.697029  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1212 00:11:42.697095  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.699321  491960 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-199484"
	I1212 00:11:42.699373  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.699850  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.735108  491960 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1212 00:11:42.735239  491960 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1212 00:11:42.738214  491960 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 00:11:42.738244  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1212 00:11:42.738314  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.738574  491960 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 00:11:42.738606  491960 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 00:11:42.738655  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.745360  491960 mustload.go:66] Loading cluster: addons-199484
	I1212 00:11:42.745583  491960 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:11:42.745927  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.749732  491960 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1212 00:11:42.752758  491960 out.go:179]   - Using image docker.io/registry:3.0.0
	I1212 00:11:42.755921  491960 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 00:11:42.755977  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1212 00:11:42.756043  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.789120  491960 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1212 00:11:42.789146  491960 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1212 00:11:42.789321  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.801702  491960 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1212 00:11:42.804586  491960 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 00:11:42.804611  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 00:11:42.804688  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.810109  491960 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-199484"
	I1212 00:11:42.810150  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.815097  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.827309  491960 addons.go:239] Setting addon default-storageclass=true in "addons-199484"
	I1212 00:11:42.827372  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.828789  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.833801  491960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:11:42.837130  491960 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1212 00:11:42.840279  491960 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 00:11:42.845705  491960 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 00:11:42.845966  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:42.847653  491960 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1212 00:11:42.847765  491960 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1212 00:11:42.849035  491960 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1212 00:11:42.868761  491960 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1212 00:11:42.876555  491960 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1212 00:11:42.878831  491960 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 00:11:42.878889  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1212 00:11:42.879028  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.881381  491960 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:11:42.881402  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:11:42.881464  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.893893  491960 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 00:11:42.893919  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1212 00:11:42.893998  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.914942  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.922480  491960 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1212 00:11:42.922503  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 00:11:42.922569  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.928295  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:42.928869  491960 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 00:11:42.928883  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1212 00:11:42.928973  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.956803  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 00:11:42.959786  491960 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 00:11:42.959815  491960 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 00:11:42.959893  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.978310  491960 out.go:179]   - Using image docker.io/busybox:stable
	I1212 00:11:43.037487  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.042842  491960 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:11:43.042919  491960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:11:43.042997  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:43.057579  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.058589  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.060926  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.065888  491960 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 00:11:43.066042  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 00:11:43.069056  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 00:11:43.069209  491960 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 00:11:43.069245  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 00:11:43.069313  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:43.099300  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 00:11:43.105696  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.117390  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 00:11:43.119358  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.131480  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 00:11:43.135235  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 00:11:43.137110  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.142912  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.146771  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 00:11:43.149828  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 00:11:43.154830  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.155357  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.156524  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 00:11:43.156540  491960 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 00:11:43.156599  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:43.188583  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.195686  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.206940  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.332670  491960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:11:43.637987  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1212 00:11:43.761965  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 00:11:43.767424  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:11:43.822889  491960 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1212 00:11:43.822913  491960 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1212 00:11:43.864020  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 00:11:43.866134  491960 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 00:11:43.866154  491960 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 00:11:43.868710  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 00:11:43.881640  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 00:11:43.894134  491960 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 00:11:43.894205  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 00:11:43.901466  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 00:11:43.910813  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:11:43.925850  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 00:11:43.925923  491960 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 00:11:43.943147  491960 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1212 00:11:43.943172  491960 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1212 00:11:43.974426  491960 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 00:11:43.974461  491960 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 00:11:44.004307  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 00:11:44.040983  491960 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 00:11:44.041009  491960 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 00:11:44.046163  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 00:11:44.054578  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 00:11:44.054603  491960 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 00:11:44.078608  491960 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 00:11:44.078633  491960 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 00:11:44.145014  491960 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 00:11:44.145037  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 00:11:44.155324  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 00:11:44.155357  491960 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 00:11:44.158060  491960 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1212 00:11:44.158088  491960 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1212 00:11:44.286948  491960 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 00:11:44.286981  491960 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 00:11:44.291467  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 00:11:44.291498  491960 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 00:11:44.293147  491960 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 00:11:44.293164  491960 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 00:11:44.379664  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 00:11:44.379690  491960 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 00:11:44.449672  491960 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1212 00:11:44.449704  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1212 00:11:44.471382  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 00:11:44.499970  491960 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 00:11:44.499992  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 00:11:44.554408  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 00:11:44.554441  491960 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 00:11:44.604264  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 00:11:44.659887  491960 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.82604916s)
	I1212 00:11:44.659927  491960 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
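
The sed pipeline that just completed rewrites the CoreDNS Corefile in place: it inserts a hosts{} stanza resolving host.minikube.internal to the host gateway (192.168.49.1) immediately before the "forward . /etc/resolv.conf" directive, and adds a log directive before errors. A minimal Go sketch of just the hosts insertion; the function name is illustrative, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord mirrors the sed expression in the log: place a hosts{}
    // block resolving host.minikube.internal just before the line carrying the
    // "forward . /etc/resolv.conf" directive.
    func injectHostRecord(corefile, hostIP string) string {
    	stanza := "        hosts {\n" +
    		"           " + hostIP + " host.minikube.internal\n" +
    		"           fallthrough\n" +
    		"        }\n"
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out.WriteString(stanza)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
    }
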
	I1212 00:11:44.659990  491960 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.327292341s)
	I1212 00:11:44.660745  491960 node_ready.go:35] waiting up to 6m0s for node "addons-199484" to be "Ready" ...
	I1212 00:11:44.663857  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1212 00:11:44.731784  491960 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 00:11:44.731811  491960 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 00:11:44.822285  491960 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:11:44.822308  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 00:11:44.839776  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:11:44.944551  491960 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 00:11:44.944577  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 00:11:45.166583  491960 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-199484" context rescaled to 1 replicas
	I1212 00:11:45.287199  491960 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 00:11:45.287227  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 00:11:45.540098  491960 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 00:11:45.540123  491960 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 00:11:45.761306  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1212 00:11:46.679345  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:11:47.880217  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.242188202s)
	I1212 00:11:47.880373  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.11838341s)
	I1212 00:11:47.880415  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.112968295s)
	I1212 00:11:47.880477  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.016432642s)
	I1212 00:11:48.957162  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.088375886s)
	I1212 00:11:48.957383  491960 addons.go:495] Verifying addon ingress=true in "addons-199484"
	I1212 00:11:48.957242  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.075531823s)
	I1212 00:11:48.957251  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.055702651s)
	I1212 00:11:48.957261  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.046382264s)
	I1212 00:11:48.957268  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.952937246s)
	I1212 00:11:48.957277  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.911091312s)
	I1212 00:11:48.957286  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.485881397s)
	I1212 00:11:48.957703  491960 addons.go:495] Verifying addon registry=true in "addons-199484"
	I1212 00:11:48.957308  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.353006501s)
	I1212 00:11:48.958219  491960 addons.go:495] Verifying addon metrics-server=true in "addons-199484"
	I1212 00:11:48.957319  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.293435573s)
	I1212 00:11:48.957339  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.117529464s)
	W1212 00:11:48.958296  491960 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 00:11:48.958315  491960 retry.go:31] will retry after 371.995665ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
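
The failure above is the classic CRD establishment race: the same kubectl apply batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the custom resource is rejected because the freshly created CRD is not yet being served ("ensure CRDs are installed first"). minikube's remedy, visible in retry.go above, is simply to re-apply after a short delay (the retried command at 00:11:49 also adds --force). A minimal sketch of that shape, with hypothetical names and backoff values, not minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs "kubectl apply" until it succeeds or attempts are
    // exhausted. "no matches for kind" errors typically resolve once the
    // apiserver has established the CRDs created earlier in the same batch.
    func applyWithRetry(kubectl string, files []string, attempts int, backoff time.Duration) error {
    	args := []string{"apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command(kubectl, args...).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
    		time.Sleep(backoff)
    		backoff *= 2 // back off between retries while the CRDs settle
    	}
    	return lastErr
    }

    func main() {
    	files := []string{
    		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
    		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
    	}
    	if err := applyWithRetry("kubectl", files, 5, 400*time.Millisecond); err != nil {
    		fmt.Println(err)
    	}
    }
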
	I1212 00:11:48.961987  491960 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-199484 service yakd-dashboard -n yakd-dashboard
	
	I1212 00:11:48.962142  491960 out.go:179] * Verifying registry addon...
	I1212 00:11:48.962187  491960 out.go:179] * Verifying ingress addon...
	I1212 00:11:48.966986  491960 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 00:11:48.968012  491960 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W1212 00:11:48.977809  491960 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
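
The default-storageclass warning above is an optimistic-concurrency conflict: another writer updated the local-path StorageClass between the read and the update, so the apiserver rejected the stale resourceVersion. The standard remedy is a read-modify-write loop; a sketch using client-go's retry.RetryOnConflict (an assumption for illustration, not necessarily how minikube handles it):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation on a StorageClass,
    // re-reading the object and retrying whenever a concurrent writer has
    // bumped its resourceVersion.
    func markNonDefault(cs kubernetes.Interface, name string) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
    		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
    		return err
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(markNonDefault(cs, "local-path"))
    }
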
	I1212 00:11:48.988384  491960 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 00:11:48.988416  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:48.988578  491960 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 00:11:48.988603  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1212 00:11:49.175793  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:11:49.223412  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.462062388s)
	I1212 00:11:49.223448  491960 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-199484"
	I1212 00:11:49.228466  491960 out.go:179] * Verifying csi-hostpath-driver addon...
	I1212 00:11:49.232229  491960 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 00:11:49.239465  491960 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 00:11:49.239493  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
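
Each kapi.go:96 line that follows is one iteration of the same wait loop: list the pods matching the addon's label selector, then re-check until every pod reports the Ready condition or a timeout fires. A rough client-go equivalent, with an assumed helper name and polling cadence:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPods polls until every pod matching selector in ns has
    // the PodReady condition set to True.
    func waitForLabeledPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 3*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // transient errors and empty lists are retried
    			}
    			for _, p := range pods.Items {
    				ready := false
    				for _, c := range p.Status.Conditions {
    					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    						ready = true
    					}
    				}
    				if !ready {
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitForLabeledPods(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute))
    }
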
	I1212 00:11:49.331173  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:11:49.472603  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:49.473419  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:49.735811  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:49.972318  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:49.973174  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:50.236271  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:50.471417  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:50.471577  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:50.526758  491960 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 00:11:50.526861  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:50.543016  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:50.660174  491960 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 00:11:50.674649  491960 addons.go:239] Setting addon gcp-auth=true in "addons-199484"
	I1212 00:11:50.674724  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:50.675192  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:50.692885  491960 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 00:11:50.692934  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:50.710451  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:50.735962  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:50.970545  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:50.975557  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:51.235584  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:51.471140  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:51.471241  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1212 00:11:51.669481  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:11:51.739567  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:51.972326  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:51.973190  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:52.020579  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.689354521s)
	I1212 00:11:52.020600  491960 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.327690125s)
	I1212 00:11:52.023538  491960 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 00:11:52.026389  491960 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1212 00:11:52.029183  491960 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 00:11:52.029212  491960 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 00:11:52.042802  491960 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 00:11:52.042866  491960 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 00:11:52.056509  491960 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 00:11:52.056534  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1212 00:11:52.070760  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 00:11:52.235851  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:52.477568  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:52.478291  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:52.570717  491960 addons.go:495] Verifying addon gcp-auth=true in "addons-199484"
	I1212 00:11:52.573900  491960 out.go:179] * Verifying gcp-auth addon...
	I1212 00:11:52.578411  491960 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 00:11:52.588791  491960 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 00:11:52.588833  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:52.737454  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:52.970523  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:52.971693  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:53.081919  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:53.235463  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:53.470297  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:53.470793  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:53.581826  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:53.735619  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:53.971637  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:53.971794  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:54.081794  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:11:54.163476  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:11:54.235239  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:54.471382  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:54.472739  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:54.582032  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:54.735692  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:54.970845  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:54.971406  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:55.081330  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:55.236029  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:55.470065  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:55.471062  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:55.581821  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:55.735548  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:55.971142  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:55.971258  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:56.082303  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:11:56.164202  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:11:56.235832  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:56.471132  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:56.471395  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:56.582177  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:56.735638  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:56.970959  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:56.971015  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:57.082140  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:57.235558  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:57.471061  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:57.472139  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:57.582368  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:57.735904  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:57.970122  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:57.971104  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:58.082370  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:11:58.164474  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:11:58.235529  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:58.471602  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:58.471935  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:58.582069  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:58.735432  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:58.970980  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:58.971314  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:59.082211  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:59.235660  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:59.471599  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:59.472047  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:59.581733  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:59.735342  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:59.971614  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:59.971869  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:00.098185  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:00.164880  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:00.243430  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:00.470445  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:00.472292  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:00.582321  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:00.735627  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:00.971351  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:00.971736  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:01.081664  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:01.235148  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:01.471182  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:01.471484  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:01.581153  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:01.735660  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:01.971404  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:01.972144  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:02.081834  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:02.235740  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:02.471413  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:02.471457  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:02.581258  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:02.664500  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:02.735371  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:02.971013  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:02.971528  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:03.081488  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:03.235201  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:03.470675  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:03.471379  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:03.581246  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:03.735892  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:03.972229  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:03.972430  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:04.082388  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:04.235818  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:04.471087  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:04.471294  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:04.582083  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:04.736000  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:04.970297  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:04.971903  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:05.081681  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:05.164481  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:05.235514  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:05.471225  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:05.471837  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:05.581589  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:05.735431  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:05.971454  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:05.971759  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:06.081931  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:06.235605  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:06.470304  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:06.471591  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:06.581566  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:06.735416  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:06.970397  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:06.971414  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:07.081243  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:07.235708  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:07.471220  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:07.471370  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:07.582406  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:07.664069  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:07.736221  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:07.970353  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:07.971449  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:08.081541  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:08.236222  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:08.470159  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:08.471620  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:08.581408  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:08.736220  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:08.970483  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:08.970986  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:09.081905  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:09.235359  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:09.470587  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:09.471664  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:09.581458  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:09.664350  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:09.735910  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:09.972110  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:09.972618  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:10.081670  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:10.235003  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:10.470111  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:10.471089  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:10.581811  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:10.735811  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:10.976440  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:10.976643  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:11.081384  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:11.242855  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:11.469904  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:11.470966  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:11.582065  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:11.735716  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:11.972243  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:11.972527  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:12.082255  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:12.164242  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:12.235553  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:12.470540  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:12.471747  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:12.582068  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:12.735552  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:12.971666  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:12.972037  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:13.082032  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:13.235840  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:13.471058  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:13.471188  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:13.582039  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:13.735431  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:13.971106  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:13.971276  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:14.081259  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:14.235867  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:14.469700  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:14.471083  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:14.581815  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:14.663558  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:14.735364  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:14.970557  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:14.971066  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:15.082284  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:15.236221  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:15.471368  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:15.471504  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:15.581514  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:15.735479  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:15.971921  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:15.972042  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:16.081908  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:16.235723  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:16.470936  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:16.471158  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:16.582106  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:16.663636  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:16.735524  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:16.971531  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:16.971668  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:17.081459  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:17.236075  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:17.470035  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:17.471234  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:17.582135  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:17.735701  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:17.970853  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:17.971030  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:18.082626  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:18.236059  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:18.470222  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:18.471071  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:18.582025  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:18.735335  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:18.971120  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:18.971872  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:19.081896  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:19.163449  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:19.235309  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:19.470445  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:19.471699  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:19.581610  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:19.735254  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:19.970402  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:19.971004  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:20.082069  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:20.235914  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:20.471012  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:20.471123  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:20.581555  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:20.735883  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:20.970762  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:20.970914  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:21.081713  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:21.163810  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	[... polling continues unchanged, all four selectors still Pending, from 00:12:21.236 through 00:12:23.514 ...]
	I1212 00:12:23.514921  491960 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 00:12:23.514941  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:23.667575  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:23.694953  491960 node_ready.go:49] node "addons-199484" is "Ready"
	I1212 00:12:23.694987  491960 node_ready.go:38] duration metric: took 39.03421277s for node "addons-199484" to be "Ready" ...
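
The Ready flip above is the node_ready.go wait succeeding; what it inspects is the node's status conditions. A minimal client-go sketch of that test, assuming a configured clientset (the helper name here is illustrative, not minikube's actual code):

package nodecheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isNodeReady reports whether the named node currently has a Ready
// condition with status True, which is what turns the earlier
// `"Ready":"False"` warnings into `node "addons-199484" is "Ready"`.
func isNodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // kubelet has not reported a Ready condition yet
}
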
	I1212 00:12:23.695034  491960 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:12:23.695120  491960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:12:23.740849  491960 api_server.go:72] duration metric: took 41.219799312s to wait for apiserver process to appear ...
	I1212 00:12:23.740871  491960 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:12:23.740889  491960 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 00:12:23.758606  491960 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 00:12:23.758625  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:23.761871  491960 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 00:12:23.763204  491960 api_server.go:141] control plane version: v1.34.2
	I1212 00:12:23.763234  491960 api_server.go:131] duration metric: took 22.356204ms to wait for apiserver health ...
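
The healthz step at api_server.go:253 is a plain HTTPS GET against the apiserver that treats a 200 response as healthy; the body is the literal `ok` echoed above. A standard-library equivalent follows. Note the assumption: this sketch skips certificate verification, which is only tolerable against a throwaway local VM, whereas minikube itself trusts the cluster CA from the kubeconfig.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes GET /healthz and requires a 200; kubeadm-style
// clusters allow this path unauthenticated.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip verification for a local test endpoint only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // the body is the literal "ok" seen in the log
}

func main() {
	if err := checkHealthz("https://192.168.49.2:8443/healthz"); err != nil {
		fmt.Println("unhealthy:", err)
		return
	}
	fmt.Println("ok")
}
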
	I1212 00:12:23.763243  491960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:12:23.770261  491960 system_pods.go:59] 19 kube-system pods found
	I1212 00:12:23.770304  491960 system_pods.go:61] "coredns-66bc5c9577-jx5nq" [1441bde5-8d17-4f82-b545-2205e14d0ec2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:12:23.770311  491960 system_pods.go:61] "csi-hostpath-attacher-0" [c0602985-5c62-4851-a954-f6726d8e981d] Pending
	I1212 00:12:23.770316  491960 system_pods.go:61] "csi-hostpath-resizer-0" [e8a9d55a-83af-40e8-8cb5-e26e1b1a17c9] Pending
	I1212 00:12:23.770357  491960 system_pods.go:61] "csi-hostpathplugin-pwldg" [599bce84-aff5-4988-84d6-dc8737e43905] Pending
	I1212 00:12:23.770368  491960 system_pods.go:61] "etcd-addons-199484" [6faaa3a3-4bb7-4e75-b91a-4982ecb6c267] Running
	I1212 00:12:23.770372  491960 system_pods.go:61] "kindnet-5nsn6" [be8e608b-8151-43d8-8758-427ba1e34b96] Running
	I1212 00:12:23.770377  491960 system_pods.go:61] "kube-apiserver-addons-199484" [e78bc7f1-6c37-4bf1-82c1-9393394e6746] Running
	I1212 00:12:23.770380  491960 system_pods.go:61] "kube-controller-manager-addons-199484" [2c62e9d6-a02c-4f11-b22c-0b2e27111fe2] Running
	I1212 00:12:23.770385  491960 system_pods.go:61] "kube-ingress-dns-minikube" [fb6b5ab3-3661-4649-b69f-3fd26a0642c0] Pending
	I1212 00:12:23.770388  491960 system_pods.go:61] "kube-proxy-67nfx" [df3b1547-ea52-41f3-bd8d-c7c6cebfdb3e] Running
	I1212 00:12:23.770398  491960 system_pods.go:61] "kube-scheduler-addons-199484" [0f189033-4b7e-4e83-a065-43e54bac390a] Running
	I1212 00:12:23.770404  491960 system_pods.go:61] "metrics-server-85b7d694d7-mp4fx" [9036b134-5412-45fa-b9e5-a5e100672fb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 00:12:23.770424  491960 system_pods.go:61] "nvidia-device-plugin-daemonset-4jhc7" [cb7157aa-9a23-4982-ad13-4bef3501f23d] Pending
	I1212 00:12:23.770439  491960 system_pods.go:61] "registry-6b586f9694-d69pq" [c6dc399c-e268-46e8-a3ea-8929470b439b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 00:12:23.770458  491960 system_pods.go:61] "registry-creds-764b6fb674-mf9j5" [7fa7165f-4d24-4504-b416-3d9ed68b1a7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 00:12:23.770469  491960 system_pods.go:61] "registry-proxy-rj8pk" [c6c691b6-a31f-4583-bd46-b7cbd62968ed] Pending
	I1212 00:12:23.770477  491960 system_pods.go:61] "snapshot-controller-7d9fbc56b8-lxshs" [d74f74dd-2db5-463e-9f47-e5876c71de58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:23.770489  491960 system_pods.go:61] "snapshot-controller-7d9fbc56b8-p7d72" [b7a508ac-02c6-4950-9190-2b77aa44f343] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:23.770508  491960 system_pods.go:61] "storage-provisioner" [ed62f61e-3d7c-4056-a513-06dc0821d283] Pending
	I1212 00:12:23.770523  491960 system_pods.go:74] duration metric: took 7.258555ms to wait for pod list to return data ...
	I1212 00:12:23.770533  491960 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:12:23.788034  491960 default_sa.go:45] found service account: "default"
	I1212 00:12:23.788079  491960 default_sa.go:55] duration metric: took 17.535284ms for default service account to be created ...
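
The default_sa.go wait is a Get on the `default` ServiceAccount repeated until the controller-manager has created it; until then, pod creation in the namespace is rejected. A sketch under the same client-go assumptions as above:

package sawait

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasDefaultServiceAccount reports whether the "default" ServiceAccount
// exists yet in the default namespace.
func hasDefaultServiceAccount(ctx context.Context, c kubernetes.Interface) (bool, error) {
	_, err := c.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil // not created yet: poll again
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
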
	I1212 00:12:23.788091  491960 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:12:23.800452  491960 system_pods.go:86] 19 kube-system pods found
	I1212 00:12:23.800489  491960 system_pods.go:89] "coredns-66bc5c9577-jx5nq" [1441bde5-8d17-4f82-b545-2205e14d0ec2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:12:23.800496  491960 system_pods.go:89] "csi-hostpath-attacher-0" [c0602985-5c62-4851-a954-f6726d8e981d] Pending
	I1212 00:12:23.800502  491960 system_pods.go:89] "csi-hostpath-resizer-0" [e8a9d55a-83af-40e8-8cb5-e26e1b1a17c9] Pending
	I1212 00:12:23.800506  491960 system_pods.go:89] "csi-hostpathplugin-pwldg" [599bce84-aff5-4988-84d6-dc8737e43905] Pending
	I1212 00:12:23.800531  491960 system_pods.go:89] "etcd-addons-199484" [6faaa3a3-4bb7-4e75-b91a-4982ecb6c267] Running
	I1212 00:12:23.800544  491960 system_pods.go:89] "kindnet-5nsn6" [be8e608b-8151-43d8-8758-427ba1e34b96] Running
	I1212 00:12:23.800549  491960 system_pods.go:89] "kube-apiserver-addons-199484" [e78bc7f1-6c37-4bf1-82c1-9393394e6746] Running
	I1212 00:12:23.800554  491960 system_pods.go:89] "kube-controller-manager-addons-199484" [2c62e9d6-a02c-4f11-b22c-0b2e27111fe2] Running
	I1212 00:12:23.800565  491960 system_pods.go:89] "kube-ingress-dns-minikube" [fb6b5ab3-3661-4649-b69f-3fd26a0642c0] Pending
	I1212 00:12:23.800572  491960 system_pods.go:89] "kube-proxy-67nfx" [df3b1547-ea52-41f3-bd8d-c7c6cebfdb3e] Running
	I1212 00:12:23.800576  491960 system_pods.go:89] "kube-scheduler-addons-199484" [0f189033-4b7e-4e83-a065-43e54bac390a] Running
	I1212 00:12:23.800582  491960 system_pods.go:89] "metrics-server-85b7d694d7-mp4fx" [9036b134-5412-45fa-b9e5-a5e100672fb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 00:12:23.800590  491960 system_pods.go:89] "nvidia-device-plugin-daemonset-4jhc7" [cb7157aa-9a23-4982-ad13-4bef3501f23d] Pending
	I1212 00:12:23.800616  491960 system_pods.go:89] "registry-6b586f9694-d69pq" [c6dc399c-e268-46e8-a3ea-8929470b439b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 00:12:23.800638  491960 system_pods.go:89] "registry-creds-764b6fb674-mf9j5" [7fa7165f-4d24-4504-b416-3d9ed68b1a7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 00:12:23.800643  491960 system_pods.go:89] "registry-proxy-rj8pk" [c6c691b6-a31f-4583-bd46-b7cbd62968ed] Pending
	I1212 00:12:23.800650  491960 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lxshs" [d74f74dd-2db5-463e-9f47-e5876c71de58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:23.800663  491960 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p7d72" [b7a508ac-02c6-4950-9190-2b77aa44f343] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:23.800667  491960 system_pods.go:89] "storage-provisioner" [ed62f61e-3d7c-4056-a513-06dc0821d283] Pending
	I1212 00:12:23.800689  491960 retry.go:31] will retry after 198.161172ms: missing components: kube-dns
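
The `will retry after 198.161172ms` line is randomized backoff: each failed round schedules the next attempt after a jittered, growing delay rather than a fixed tick, which is why the two retry intervals above differ. A stand-in for that pattern; the helper name and jitter factor are illustrative, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter runs check until it succeeds or attempts run out,
// sleeping a randomized, growing delay between tries.
func retryWithJitter(attempts int, base time.Duration, check func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = check(); err == nil {
			return nil
		}
		// Jitter: sleep between 0.5x and 1.5x of the current base delay.
		d := base + time.Duration(rand.Int63n(int64(base))) - base/2
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		base *= 2 // exponential growth; real implementations cap this
	}
	return err
}

func main() {
	missing := 3
	err := retryWithJitter(5, 200*time.Millisecond, func() error {
		if missing--; missing <= 0 {
			return nil
		}
		return errors.New("missing components: kube-dns")
	})
	fmt.Println("done:", err)
}
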
	I1212 00:12:23.980385  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:23.980635  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:24.041468  491960 system_pods.go:86] 19 kube-system pods found
	I1212 00:12:24.041508  491960 system_pods.go:89] "coredns-66bc5c9577-jx5nq" [1441bde5-8d17-4f82-b545-2205e14d0ec2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:12:24.041515  491960 system_pods.go:89] "csi-hostpath-attacher-0" [c0602985-5c62-4851-a954-f6726d8e981d] Pending
	I1212 00:12:24.041557  491960 system_pods.go:89] "csi-hostpath-resizer-0" [e8a9d55a-83af-40e8-8cb5-e26e1b1a17c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 00:12:24.041571  491960 system_pods.go:89] "csi-hostpathplugin-pwldg" [599bce84-aff5-4988-84d6-dc8737e43905] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 00:12:24.041577  491960 system_pods.go:89] "etcd-addons-199484" [6faaa3a3-4bb7-4e75-b91a-4982ecb6c267] Running
	I1212 00:12:24.041590  491960 system_pods.go:89] "kindnet-5nsn6" [be8e608b-8151-43d8-8758-427ba1e34b96] Running
	I1212 00:12:24.041594  491960 system_pods.go:89] "kube-apiserver-addons-199484" [e78bc7f1-6c37-4bf1-82c1-9393394e6746] Running
	I1212 00:12:24.041599  491960 system_pods.go:89] "kube-controller-manager-addons-199484" [2c62e9d6-a02c-4f11-b22c-0b2e27111fe2] Running
	I1212 00:12:24.041623  491960 system_pods.go:89] "kube-ingress-dns-minikube" [fb6b5ab3-3661-4649-b69f-3fd26a0642c0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 00:12:24.041635  491960 system_pods.go:89] "kube-proxy-67nfx" [df3b1547-ea52-41f3-bd8d-c7c6cebfdb3e] Running
	I1212 00:12:24.041641  491960 system_pods.go:89] "kube-scheduler-addons-199484" [0f189033-4b7e-4e83-a065-43e54bac390a] Running
	I1212 00:12:24.041649  491960 system_pods.go:89] "metrics-server-85b7d694d7-mp4fx" [9036b134-5412-45fa-b9e5-a5e100672fb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 00:12:24.041661  491960 system_pods.go:89] "nvidia-device-plugin-daemonset-4jhc7" [cb7157aa-9a23-4982-ad13-4bef3501f23d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 00:12:24.041669  491960 system_pods.go:89] "registry-6b586f9694-d69pq" [c6dc399c-e268-46e8-a3ea-8929470b439b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 00:12:24.041680  491960 system_pods.go:89] "registry-creds-764b6fb674-mf9j5" [7fa7165f-4d24-4504-b416-3d9ed68b1a7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 00:12:24.041698  491960 system_pods.go:89] "registry-proxy-rj8pk" [c6c691b6-a31f-4583-bd46-b7cbd62968ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 00:12:24.041712  491960 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lxshs" [d74f74dd-2db5-463e-9f47-e5876c71de58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:24.041719  491960 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p7d72" [b7a508ac-02c6-4950-9190-2b77aa44f343] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:24.041743  491960 system_pods.go:89] "storage-provisioner" [ed62f61e-3d7c-4056-a513-06dc0821d283] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:12:24.041758  491960 retry.go:31] will retry after 364.11001ms: missing components: kube-dns
	I1212 00:12:24.107145  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:24.253926  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:24.411110  491960 system_pods.go:86] 19 kube-system pods found
	I1212 00:12:24.411146  491960 system_pods.go:89] "coredns-66bc5c9577-jx5nq" [1441bde5-8d17-4f82-b545-2205e14d0ec2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:12:24.411157  491960 system_pods.go:89] "csi-hostpath-attacher-0" [c0602985-5c62-4851-a954-f6726d8e981d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 00:12:24.411204  491960 system_pods.go:89] "csi-hostpath-resizer-0" [e8a9d55a-83af-40e8-8cb5-e26e1b1a17c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 00:12:24.411212  491960 system_pods.go:89] "csi-hostpathplugin-pwldg" [599bce84-aff5-4988-84d6-dc8737e43905] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 00:12:24.411223  491960 system_pods.go:89] "etcd-addons-199484" [6faaa3a3-4bb7-4e75-b91a-4982ecb6c267] Running
	I1212 00:12:24.411229  491960 system_pods.go:89] "kindnet-5nsn6" [be8e608b-8151-43d8-8758-427ba1e34b96] Running
	I1212 00:12:24.411234  491960 system_pods.go:89] "kube-apiserver-addons-199484" [e78bc7f1-6c37-4bf1-82c1-9393394e6746] Running
	I1212 00:12:24.411238  491960 system_pods.go:89] "kube-controller-manager-addons-199484" [2c62e9d6-a02c-4f11-b22c-0b2e27111fe2] Running
	I1212 00:12:24.411260  491960 system_pods.go:89] "kube-ingress-dns-minikube" [fb6b5ab3-3661-4649-b69f-3fd26a0642c0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 00:12:24.411276  491960 system_pods.go:89] "kube-proxy-67nfx" [df3b1547-ea52-41f3-bd8d-c7c6cebfdb3e] Running
	I1212 00:12:24.411281  491960 system_pods.go:89] "kube-scheduler-addons-199484" [0f189033-4b7e-4e83-a065-43e54bac390a] Running
	I1212 00:12:24.411287  491960 system_pods.go:89] "metrics-server-85b7d694d7-mp4fx" [9036b134-5412-45fa-b9e5-a5e100672fb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 00:12:24.411314  491960 system_pods.go:89] "nvidia-device-plugin-daemonset-4jhc7" [cb7157aa-9a23-4982-ad13-4bef3501f23d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 00:12:24.411322  491960 system_pods.go:89] "registry-6b586f9694-d69pq" [c6dc399c-e268-46e8-a3ea-8929470b439b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 00:12:24.411328  491960 system_pods.go:89] "registry-creds-764b6fb674-mf9j5" [7fa7165f-4d24-4504-b416-3d9ed68b1a7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 00:12:24.411343  491960 system_pods.go:89] "registry-proxy-rj8pk" [c6c691b6-a31f-4583-bd46-b7cbd62968ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 00:12:24.411356  491960 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lxshs" [d74f74dd-2db5-463e-9f47-e5876c71de58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:24.411375  491960 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p7d72" [b7a508ac-02c6-4950-9190-2b77aa44f343] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:24.411391  491960 system_pods.go:89] "storage-provisioner" [ed62f61e-3d7c-4056-a513-06dc0821d283] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:12:24.411411  491960 system_pods.go:126] duration metric: took 623.290756ms to wait for k8s-apps to be running ...
	I1212 00:12:24.411427  491960 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:12:24.411505  491960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:12:24.427503  491960 system_svc.go:56] duration metric: took 16.067385ms WaitForService to wait for kubelet
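
The kubelet probe relies only on systemd's exit status: `systemctl is-active --quiet` prints nothing and returns 0 exactly when the unit is active, so the SSH runner above needs no output parsing. Run locally (the sudo/SSH plumbing assumed away), it reduces to:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses stdout; the exit status alone carries the answer,
	// which is why the log records only a duration, never output.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
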
	I1212 00:12:24.427530  491960 kubeadm.go:587] duration metric: took 41.90648597s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:12:24.427573  491960 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:12:24.430566  491960 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:12:24.430597  491960 node_conditions.go:123] node cpu capacity is 2
	I1212 00:12:24.430613  491960 node_conditions.go:105] duration metric: took 3.026455ms to run NodePressure ...
	I1212 00:12:24.430641  491960 start.go:242] waiting for startup goroutines ...
	[... the poll loop repeats every ~250ms with no state change: registry, ingress-nginx, gcp-auth, and csi-hostpath-driver all remain Pending from 00:12:24.471 through 00:12:58.736 ...]
	I1212 00:12:58.970742  491960 kapi.go:107] duration metric: took 1m10.003785164s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 00:12:58.971384  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:59.082543  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:59.235571  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:59.472082  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:59.582498  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:59.736000  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:59.971742  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:00.104598  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:00.242494  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:00.472066  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:00.582441  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:00.735882  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:00.971780  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:01.082020  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:01.236517  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:01.472021  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:01.582331  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:01.736034  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:01.973818  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:02.081525  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:02.236461  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:02.472652  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:02.582169  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:02.736808  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:02.972522  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:03.081908  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:03.236231  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:03.471532  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:03.587546  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:03.736053  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:03.971465  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:04.082547  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:04.235798  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:04.471398  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:04.582771  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:04.736813  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:04.971589  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:05.082179  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:05.235204  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:05.471942  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:05.582512  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:05.736267  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:05.972515  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:06.081956  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:06.236612  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:06.476460  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:06.603009  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:06.737266  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:06.971759  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:07.081643  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:07.236066  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:07.471315  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:07.585124  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:07.736125  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:07.971803  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:08.083007  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:08.240392  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:08.473834  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:08.581659  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:08.737946  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:08.971308  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:09.103849  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:09.236958  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:09.474818  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:09.582672  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:09.740348  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:09.972075  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:10.083378  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:10.236791  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:10.472707  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:10.582491  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:10.736910  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:10.972175  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:11.082316  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:11.236266  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:11.471408  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:11.581672  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:11.736617  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:11.973466  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:12.082248  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:12.235788  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:12.471295  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:12.582760  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:12.736995  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:12.971852  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:13.082663  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:13.235893  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:13.471529  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:13.581982  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:13.736210  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:13.971413  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:14.082142  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:14.236368  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:14.471408  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:14.581327  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:14.735534  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:14.973480  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:15.082055  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:15.249070  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:15.471821  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:15.581932  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:15.736821  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:15.971926  491960 kapi.go:107] duration metric: took 1m27.003898399s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 00:13:16.082012  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:16.236260  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:16.631711  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:16.736093  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:17.082498  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:17.236401  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:17.581242  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:17.736187  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:18.081915  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:18.236763  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:18.583619  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:18.736207  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:19.081599  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:19.236445  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:19.581599  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:19.735858  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:20.081423  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:20.236375  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:20.581742  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:20.736174  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:21.083182  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:21.236627  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:21.581670  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:21.742046  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:22.084719  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:22.236093  491960 kapi.go:107] duration metric: took 1m33.003867734s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 00:13:22.581075  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:23.082179  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:23.582980  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:24.081780  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:24.582058  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:25.081646  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:25.581985  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:26.081989  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:26.581641  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:27.082570  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:27.582546  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:28.082719  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:28.582626  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:29.082803  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:29.583663  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:30.082925  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:30.582477  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:31.082585  491960 kapi.go:107] duration metric: took 1m38.504176513s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 00:13:31.085587  491960 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-199484 cluster.
	I1212 00:13:31.088469  491960 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 00:13:31.091368  491960 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 00:13:31.094266  491960 out.go:179] * Enabled addons: inspektor-gadget, registry-creds, storage-provisioner, cloud-spanner, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1212 00:13:31.097174  491960 addons.go:530] duration metric: took 1m48.575766633s for enable addons: enabled=[inspektor-gadget registry-creds storage-provisioner cloud-spanner nvidia-device-plugin amd-gpu-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1212 00:13:31.097236  491960 start.go:247] waiting for cluster config update ...
	I1212 00:13:31.097261  491960 start.go:256] writing updated cluster config ...
	I1212 00:13:31.097588  491960 ssh_runner.go:195] Run: rm -f paused
	I1212 00:13:31.103693  491960 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:13:31.109294  491960 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jx5nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.115112  491960 pod_ready.go:94] pod "coredns-66bc5c9577-jx5nq" is "Ready"
	I1212 00:13:31.115144  491960 pod_ready.go:86] duration metric: took 5.811699ms for pod "coredns-66bc5c9577-jx5nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.117680  491960 pod_ready.go:83] waiting for pod "etcd-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.123385  491960 pod_ready.go:94] pod "etcd-addons-199484" is "Ready"
	I1212 00:13:31.123417  491960 pod_ready.go:86] duration metric: took 5.705207ms for pod "etcd-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.131121  491960 pod_ready.go:83] waiting for pod "kube-apiserver-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.136832  491960 pod_ready.go:94] pod "kube-apiserver-addons-199484" is "Ready"
	I1212 00:13:31.136864  491960 pod_ready.go:86] duration metric: took 5.713814ms for pod "kube-apiserver-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.139485  491960 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.507295  491960 pod_ready.go:94] pod "kube-controller-manager-addons-199484" is "Ready"
	I1212 00:13:31.507322  491960 pod_ready.go:86] duration metric: took 367.809688ms for pod "kube-controller-manager-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.707749  491960 pod_ready.go:83] waiting for pod "kube-proxy-67nfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:32.108536  491960 pod_ready.go:94] pod "kube-proxy-67nfx" is "Ready"
	I1212 00:13:32.108561  491960 pod_ready.go:86] duration metric: took 400.7853ms for pod "kube-proxy-67nfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:32.307780  491960 pod_ready.go:83] waiting for pod "kube-scheduler-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:32.708339  491960 pod_ready.go:94] pod "kube-scheduler-addons-199484" is "Ready"
	I1212 00:13:32.708409  491960 pod_ready.go:86] duration metric: took 400.601499ms for pod "kube-scheduler-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:32.708425  491960 pod_ready.go:40] duration metric: took 1.604693936s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:13:32.762174  491960 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 00:13:32.765642  491960 out.go:179] * Done! kubectl is now configured to use "addons-199484" cluster and "default" namespace by default
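
	For reference, the kapi.go:96 polling loop above waits for pods matching a label selector to leave Pending, and kapi.go:107 reports the total wait. The same readiness check can be reproduced by hand with kubectl; a minimal sketch against this run's cluster (label, namespace, and context name come from the log; the 240s timeout is an assumption):

	# Block until the gcp-auth webhook pod is Ready, as the wait loop above does.
	kubectl --context addons-199484 -n gcp-auth wait pod \
	  --selector kubernetes.io/minikube-addons=gcp-auth \
	  --for=condition=Ready --timeout=240s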
	
	
	==> CRI-O <==
	Dec 12 00:15:37 addons-199484 crio[827]: time="2025-12-12T00:15:37.066503302Z" level=info msg="Removed pod sandbox: d1244d908b671d3fc2e84bb829500e94e36b54e5e32099b6d76f8ed02452ea40" id=a464d0ed-01a6-46bc-a905-ce7a6389f7d2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.187696024Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-cjgsd/POD" id=e99fc488-4d7e-4e8f-b89c-7922ca96586a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.187778984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.212159025Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-cjgsd Namespace:default ID:883551ff8d9c620d2a4be553c4b55a47db1cf81ae48f0e80863f08d16e7e6d2a UID:19ed8b57-9710-4832-b7bc-a5558595ae9f NetNS:/var/run/netns/1f1dcaff-c06b-4476-beca-f7544ea94cb4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079088}] Aliases:map[]}"
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.212218995Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-cjgsd to CNI network \"kindnet\" (type=ptp)"
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.226648994Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-cjgsd Namespace:default ID:883551ff8d9c620d2a4be553c4b55a47db1cf81ae48f0e80863f08d16e7e6d2a UID:19ed8b57-9710-4832-b7bc-a5558595ae9f NetNS:/var/run/netns/1f1dcaff-c06b-4476-beca-f7544ea94cb4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079088}] Aliases:map[]}"
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.227019529Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-cjgsd for CNI network kindnet (type=ptp)"
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.230136523Z" level=info msg="Ran pod sandbox 883551ff8d9c620d2a4be553c4b55a47db1cf81ae48f0e80863f08d16e7e6d2a with infra container: default/hello-world-app-5d498dc89-cjgsd/POD" id=e99fc488-4d7e-4e8f-b89c-7922ca96586a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.231728537Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3147b4da-2eea-4d0b-bb79-09487b161d6c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.231979485Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=3147b4da-2eea-4d0b-bb79-09487b161d6c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.232084099Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=3147b4da-2eea-4d0b-bb79-09487b161d6c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.235268274Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=28f953ff-ea27-409a-8fc3-0ea0007dc664 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.2436738Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.819217752Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=28f953ff-ea27-409a-8fc3-0ea0007dc664 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.819981058Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=cf31b3bc-1b79-4ab6-ac54-b720ce66be44 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.823441519Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=947e0d54-1030-45fa-a1ce-663376e6bd91 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.831889768Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-cjgsd/hello-world-app" id=c51e43b7-30fd-4d89-9876-5cbf41f20547 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.832310501Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.841925033Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.842262158Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8192e17ff37c42c1ee057439b1e67312a93f06cfd60b4c3570e415ee3da832a9/merged/etc/passwd: no such file or directory"
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.84236202Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8192e17ff37c42c1ee057439b1e67312a93f06cfd60b4c3570e415ee3da832a9/merged/etc/group: no such file or directory"
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.842785223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.862818077Z" level=info msg="Created container b0ed55a599b08cf894c279262db9fa9fb40e6552c57ba731ca24601b9b654501: default/hello-world-app-5d498dc89-cjgsd/hello-world-app" id=c51e43b7-30fd-4d89-9876-5cbf41f20547 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.866555111Z" level=info msg="Starting container: b0ed55a599b08cf894c279262db9fa9fb40e6552c57ba731ca24601b9b654501" id=9f4c1f62-32f7-4256-acaa-453c003cebb6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:16:30 addons-199484 crio[827]: time="2025-12-12T00:16:30.871518653Z" level=info msg="Started container" PID=6954 containerID=b0ed55a599b08cf894c279262db9fa9fb40e6552c57ba731ca24601b9b654501 description=default/hello-world-app-5d498dc89-cjgsd/hello-world-app id=9f4c1f62-32f7-4256-acaa-453c003cebb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=883551ff8d9c620d2a4be553c4b55a47db1cf81ae48f0e80863f08d16e7e6d2a
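
	The RunPodSandbox -> CreateContainer -> StartContainer sequence logged above can be inspected from the node with crictl. A sketch, assuming a shell on the node (e.g. via minikube ssh) with crictl pointed at the CRI-O socket; the IDs are the sandbox and container IDs from this log:

	# List the sandbox created for hello-world-app and dump its network/CNI state.
	sudo crictl pods --name hello-world-app-5d498dc89-cjgsd
	sudo crictl inspectp 883551ff8d9c620d2a4be553c4b55a47db1cf81ae48f0e80863f08d16e7e6d2a
	# Inspect the container that was created and started from the pulled image.
	sudo crictl inspect b0ed55a599b08cf894c279262db9fa9fb40e6552c57ba731ca24601b9b654501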
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	b0ed55a599b08       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   883551ff8d9c6       hello-world-app-5d498dc89-cjgsd             default
	e091af8fc8995       public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d                                           2 minutes ago            Running             nginx                                    0                   e2ba034b1c49b       nginx                                       default
	bffb40f9c1d3c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   54b40a8ef2da4       busybox                                     default
	8cdc6f2f81593       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   79ad452890c93       gcp-auth-78565c9fb4-29cjc                   gcp-auth
	0b887da02a72c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   57b2c259d2d98       csi-hostpathplugin-pwldg                    kube-system
	0104cc6b5dd42       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   57b2c259d2d98       csi-hostpathplugin-pwldg                    kube-system
	c47d48a718439       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   57b2c259d2d98       csi-hostpathplugin-pwldg                    kube-system
	5bff244d59411       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   57b2c259d2d98       csi-hostpathplugin-pwldg                    kube-system
	ef710450ec222       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   57b2c259d2d98       csi-hostpathplugin-pwldg                    kube-system
	58d2c13864228       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   f523f728238d3       ingress-nginx-controller-85d4c799dd-tm9s4   ingress-nginx
	c609012661b80       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   ac6a171ca7cd8       gadget-926zk                                gadget
	a10cfd4bcf4a6       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   57b2c259d2d98       csi-hostpathplugin-pwldg                    kube-system
	7adec1475ef10       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   2d037c44f8a02       nvidia-device-plugin-daemonset-4jhc7        kube-system
	ebe4e7c9ca0fc       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   1b47233f95609       registry-proxy-rj8pk                        kube-system
	b063d4752b149       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   26f1e3f1839a5       yakd-dashboard-5ff678cb9-54gdk              yakd-dashboard
	120bd80c13ba7       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   bdc9f4c8acc57       local-path-provisioner-648f6765c9-jz8vr     local-path-storage
	e42d45fdc95da       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             3 minutes ago            Exited              patch                                    2                   d38258a9d396a       ingress-nginx-admission-patch-5c76k         ingress-nginx
	c6014c9718ca5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              create                                   0                   758e2f6efb010       ingress-nginx-admission-create-75m2k        ingress-nginx
	aa9c00c762f80       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   ba9216cb127f4       cloud-spanner-emulator-5bdddb765-vnsm7      default
	7859d680677c9       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   03fd0053a3ae8       kube-ingress-dns-minikube                   kube-system
	e47e9aabb94d9       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   6e0210b2e75dd       csi-hostpath-attacher-0                     kube-system
	afc99c45439ad       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   b9ec3cdd46cca       metrics-server-85b7d694d7-mp4fx             kube-system
	e021d26e2771d       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   9270912a4a449       csi-hostpath-resizer-0                      kube-system
	9da22fcbd3de3       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   4111dda14b5b1       registry-6b586f9694-d69pq                   kube-system
	12d12a2561d73       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   2d7596ebe068b       snapshot-controller-7d9fbc56b8-lxshs        kube-system
	549fae89400cf       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   f31edd0cf94de       snapshot-controller-7d9fbc56b8-p7d72        kube-system
	e4aaf8d36273d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   8364a161101b4       storage-provisioner                         kube-system
	be3ca68362678       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   48d20da50be89       coredns-66bc5c9577-jx5nq                    kube-system
	e251865f884a7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   0169747ae697d       kindnet-5nsn6                               kube-system
	f4dd998c607c5       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   7b1a71049e6f4       kube-proxy-67nfx                            kube-system
	10211afe59632       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             5 minutes ago            Running             kube-apiserver                           0                   0370f535c60d1       kube-apiserver-addons-199484                kube-system
	7e478b538e97d       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             5 minutes ago            Running             kube-controller-manager                  0                   7c607c3408224       kube-controller-manager-addons-199484       kube-system
	810bdb88faff8       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             5 minutes ago            Running             etcd                                     0                   3a252a9aeae97       etcd-addons-199484                          kube-system
	8f971c589eb18       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             5 minutes ago            Running             kube-scheduler                           0                   218268e9b0239       kube-scheduler-addons-199484                kube-system
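
	The table above is the runtime's own view of the node and can be regenerated in place; a sketch using the minikube profile name from this run:

	# Reproduce the container status table from inside the node (-a includes Exited).
	minikube -p addons-199484 ssh -- sudo crictl ps -a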
	
	
	==> coredns [be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1] <==
	[INFO] 10.244.0.12:34448 - 48660 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002267094s
	[INFO] 10.244.0.12:34448 - 21147 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000159266s
	[INFO] 10.244.0.12:34448 - 42908 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000136802s
	[INFO] 10.244.0.12:35429 - 54993 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177489s
	[INFO] 10.244.0.12:35429 - 54531 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100371s
	[INFO] 10.244.0.12:55804 - 14784 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000142126s
	[INFO] 10.244.0.12:55804 - 14348 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083962s
	[INFO] 10.244.0.12:35948 - 17357 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110956s
	[INFO] 10.244.0.12:35948 - 16928 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107755s
	[INFO] 10.244.0.12:48440 - 46169 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00127048s
	[INFO] 10.244.0.12:48440 - 45956 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001290048s
	[INFO] 10.244.0.12:43487 - 59800 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000116371s
	[INFO] 10.244.0.12:43487 - 59603 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092305s
	[INFO] 10.244.0.21:51137 - 52159 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000265126s
	[INFO] 10.244.0.21:34101 - 18328 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000380398s
	[INFO] 10.244.0.21:35727 - 58825 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000191463s
	[INFO] 10.244.0.21:59127 - 13889 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159291s
	[INFO] 10.244.0.21:38882 - 29066 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000149511s
	[INFO] 10.244.0.21:44659 - 23772 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064491s
	[INFO] 10.244.0.21:51669 - 52108 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002638532s
	[INFO] 10.244.0.21:37979 - 37419 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002927896s
	[INFO] 10.244.0.21:43043 - 18570 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002555235s
	[INFO] 10.244.0.21:43515 - 44500 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002784227s
	[INFO] 10.244.0.23:59386 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000165535s
	[INFO] 10.244.0.23:53556 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000181157s
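
	The NXDOMAIN/NOERROR pairs above are a single lookup being expanded through the pod's DNS search path: with the default ndots:5, a name like registry.kube-system.svc.cluster.local is first tried with each search suffix appended (hence the .kube-system.svc.cluster.local..., .svc.cluster.local, .cluster.local, and .us-east-2.compute.internal NXDOMAIN attempts) before the absolute name resolves. A sketch for confirming the search path from a pod in this cluster (pod name taken from the container table above; the nameserver value is an assumption, the default kube-dns service IP):

	# Show the resolv.conf that drives the query expansion seen in the log.
	kubectl -n kube-system exec registry-6b586f9694-d69pq -- cat /etc/resolv.conf
	# Expected shape, reconstructed from the query suffixes above:
	#   search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	#   nameserver 10.96.0.10   (assumption: default kube-dns service IP)
	#   options ndots:5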
	
	
	==> describe nodes <==
	Name:               addons-199484
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-199484
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=addons-199484
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_11_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-199484
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-199484"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:11:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-199484
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:16:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:16:12 +0000   Fri, 12 Dec 2025 00:11:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:16:12 +0000   Fri, 12 Dec 2025 00:11:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:16:12 +0000   Fri, 12 Dec 2025 00:11:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:16:12 +0000   Fri, 12 Dec 2025 00:12:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-199484
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                9e7dfbbd-1a6e-487c-a48c-31ada1830da5
	  Boot ID:                    cbbb78f6-c2df-4b23-9269-8d5d442bffaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     cloud-spanner-emulator-5bdddb765-vnsm7       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  default                     hello-world-app-5d498dc89-cjgsd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-926zk                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  gcp-auth                    gcp-auth-78565c9fb4-29cjc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-tm9s4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m43s
	  kube-system                 coredns-66bc5c9577-jx5nq                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m49s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 csi-hostpathplugin-pwldg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 etcd-addons-199484                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m54s
	  kube-system                 kindnet-5nsn6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m50s
	  kube-system                 kube-apiserver-addons-199484                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-controller-manager-addons-199484        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-proxy-67nfx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-addons-199484                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 metrics-server-85b7d694d7-mp4fx              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m44s
	  kube-system                 nvidia-device-plugin-daemonset-4jhc7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 registry-6b586f9694-d69pq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 registry-creds-764b6fb674-mf9j5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 registry-proxy-rj8pk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 snapshot-controller-7d9fbc56b8-lxshs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 snapshot-controller-7d9fbc56b8-p7d72         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  local-path-storage          local-path-provisioner-648f6765c9-jz8vr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-54gdk               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m49s                kube-proxy       
	  Warning  CgroupV1                 5m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node addons-199484 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node addons-199484 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m2s (x8 over 5m2s)  kubelet          Node addons-199484 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m55s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m55s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m54s                kubelet          Node addons-199484 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m54s                kubelet          Node addons-199484 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m54s                kubelet          Node addons-199484 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m50s                node-controller  Node addons-199484 event: Registered Node addons-199484 in Controller
	  Normal   NodeReady                4m8s                 kubelet          Node addons-199484 status is now: NodeReady
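
A quick sanity check on the "Allocated resources" summary above: the per-pod requests in the table do add up to the reported totals. A throwaway sketch in Go, with every value copied from the table (pod order follows the table):

    package main

    import "fmt"

    func main() {
        // CPU requests in millicores: ingress-nginx, coredns, etcd, kindnet,
        // kube-apiserver, kube-controller-manager, kube-scheduler, metrics-server.
        cpuRequests := []int{100, 100, 100, 100, 250, 200, 100, 100}
        // Memory requests in Mi: ingress-nginx, coredns, etcd, kindnet,
        // metrics-server, yakd-dashboard.
        memRequests := []int{90, 70, 100, 50, 200, 128}
        sum := func(xs []int) (t int) {
            for _, x := range xs {
                t += x
            }
            return
        }
        fmt.Printf("cpu requests: %dm\n", sum(cpuRequests))  // 1050m, matching the summary
        fmt.Printf("mem requests: %dMi\n", sum(memRequests)) // 638Mi, matching the summary
    }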
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df] <==
	{"level":"warn","ts":"2025-12-12T00:11:32.449168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.470990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.531758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.561321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.598536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.622486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.663599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.682782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.711568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.771454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.787007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.818700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.849964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.872149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.913586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.949657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.982547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:33.006907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:33.197473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:49.619554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:49.623175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:12:11.127795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:12:11.144333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:12:11.191870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:12:11.226430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37268","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [8cdc6f2f81593a6954c230601978343429395ab486dc6f876c12478dc8cfe38d] <==
	2025/12/12 00:13:30 GCP Auth Webhook started!
	2025/12/12 00:13:33 Ready to marshal response ...
	2025/12/12 00:13:33 Ready to write response ...
	2025/12/12 00:13:33 Ready to marshal response ...
	2025/12/12 00:13:33 Ready to write response ...
	2025/12/12 00:13:33 Ready to marshal response ...
	2025/12/12 00:13:33 Ready to write response ...
	2025/12/12 00:13:54 Ready to marshal response ...
	2025/12/12 00:13:54 Ready to write response ...
	2025/12/12 00:13:59 Ready to marshal response ...
	2025/12/12 00:13:59 Ready to write response ...
	2025/12/12 00:13:59 Ready to marshal response ...
	2025/12/12 00:13:59 Ready to write response ...
	2025/12/12 00:14:08 Ready to marshal response ...
	2025/12/12 00:14:08 Ready to write response ...
	2025/12/12 00:14:10 Ready to marshal response ...
	2025/12/12 00:14:10 Ready to write response ...
	2025/12/12 00:14:13 Ready to marshal response ...
	2025/12/12 00:14:13 Ready to write response ...
	2025/12/12 00:14:34 Ready to marshal response ...
	2025/12/12 00:14:34 Ready to write response ...
	2025/12/12 00:16:29 Ready to marshal response ...
	2025/12/12 00:16:29 Ready to write response ...
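
Each "Ready to marshal response" / "Ready to write response" pair above is one admission request being answered by the webhook. A minimal sketch of that request/response shape (a generic mutating-webhook handler, not gcp-auth's actual code; the /mutate path and port are assumptions, and the real server terminates TLS on 443):

    package main

    import (
        "encoding/json"
        "log"
        "net/http"

        admissionv1 "k8s.io/api/admission/v1"
    )

    func mutate(w http.ResponseWriter, r *http.Request) {
        var review admissionv1.AdmissionReview
        if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
            http.Error(w, "bad AdmissionReview", http.StatusBadRequest)
            return
        }
        log.Println("Ready to marshal response ...")
        review.Response = &admissionv1.AdmissionResponse{
            UID:     review.Request.UID,
            Allowed: true, // no-op admit; a real webhook would attach a JSON patch here
        }
        log.Println("Ready to write response ...")
        if err := json.NewEncoder(w).Encode(&review); err != nil {
            log.Printf("write response: %v", err)
        }
    }

    func main() {
        http.HandleFunc("/mutate", mutate)
        log.Fatal(http.ListenAndServe(":8443", nil)) // plain HTTP keeps the sketch short
    }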
	
	
	==> kernel <==
	 00:16:31 up  2:58,  0 user,  load average: 0.39, 1.37, 1.68
	Linux addons-199484 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1] <==
	I1212 00:14:22.712865       1 main.go:301] handling current node
	I1212 00:14:32.718828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:14:32.718860       1 main.go:301] handling current node
	I1212 00:14:42.711944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:14:42.712057       1 main.go:301] handling current node
	I1212 00:14:52.712993       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:14:52.713025       1 main.go:301] handling current node
	I1212 00:15:02.718874       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:02.718914       1 main.go:301] handling current node
	I1212 00:15:12.712306       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:12.712339       1 main.go:301] handling current node
	I1212 00:15:22.716556       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:22.716588       1 main.go:301] handling current node
	I1212 00:15:32.718766       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:32.718797       1 main.go:301] handling current node
	I1212 00:15:42.718915       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:42.719024       1 main.go:301] handling current node
	I1212 00:15:52.716686       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:15:52.716805       1 main.go:301] handling current node
	I1212 00:16:02.712303       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:16:02.712338       1 main.go:301] handling current node
	I1212 00:16:12.712195       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:16:12.712228       1 main.go:301] handling current node
	I1212 00:16:22.717611       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:16:22.717716       1 main.go:301] handling current node
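
The fixed ten-second spacing of the kindnet lines above suggests a ticker-driven reconcile loop over the node's IP set. A minimal sketch, with the interval and IP map mirroring the log and everything else assumed:

    package main

    import (
        "log"
        "time"
    )

    func main() {
        nodeIPs := map[string]struct{}{"192.168.49.2": {}}
        for range time.Tick(10 * time.Second) {
            log.Printf("Handling node with IPs: %v", nodeIPs)
            log.Printf("handling current node")
        }
    }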
	
	
	==> kube-apiserver [10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61] <==
	I1212 00:11:52.441587       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.252.137"}
	W1212 00:12:11.127749       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 00:12:11.144344       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 00:12:11.190801       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 00:12:11.216191       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 00:12:23.248819       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.252.137:443: connect: connection refused
	E1212 00:12:23.248867       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.252.137:443: connect: connection refused" logger="UnhandledError"
	W1212 00:12:23.249882       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.252.137:443: connect: connection refused
	E1212 00:12:23.249972       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.252.137:443: connect: connection refused" logger="UnhandledError"
	W1212 00:12:23.371517       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.252.137:443: connect: connection refused
	E1212 00:12:23.371683       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.252.137:443: connect: connection refused" logger="UnhandledError"
	E1212 00:12:46.459346       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.90.105:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.90.105:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.90.105:443: connect: connection refused" logger="UnhandledError"
	W1212 00:12:46.459531       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 00:12:46.459584       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 00:12:46.546636       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 00:13:42.761098       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58638: use of closed network connection
	E1212 00:13:43.046633       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58670: use of closed network connection
	E1212 00:13:43.190024       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58680: use of closed network connection
	I1212 00:14:10.498316       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 00:14:10.820178       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.38.63"}
	I1212 00:14:19.497344       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1212 00:16:30.111829       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.31.100"}
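
The "Failed calling webhook, failing open gcp-auth-mutate.k8s.io" lines above are the expected behavior for a webhook registered with failurePolicy Ignore: when the endpoint is unreachable, the apiserver admits the request anyway and only logs the error. A sketch of the relevant registration field (the webhook name is taken from the log; the rest is illustrative):

    package main

    import (
        "fmt"

        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    )

    func main() {
        ignore := admissionregistrationv1.Ignore // "fail open" when the webhook cannot be reached
        webhook := admissionregistrationv1.MutatingWebhook{
            Name:          "gcp-auth-mutate.k8s.io",
            FailurePolicy: &ignore,
        }
        fmt.Println(webhook.Name, *webhook.FailurePolicy)
    }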
	
	
	==> kube-controller-manager [7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae] <==
	I1212 00:11:41.112252       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 00:11:41.113046       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 00:11:41.113162       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 00:11:41.116185       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:11:41.120801       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 00:11:41.129059       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:11:41.131209       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 00:11:41.133456       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 00:11:41.143039       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 00:11:41.144110       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 00:11:41.147445       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 00:11:41.149707       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 00:11:41.154966       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1212 00:11:47.469119       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1212 00:11:47.495077       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1212 00:12:11.121132       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 00:12:11.121289       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1212 00:12:11.121342       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1212 00:12:11.173463       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1212 00:12:11.182443       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1212 00:12:11.222802       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:12:11.282947       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:12:26.104244       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1212 00:12:41.228259       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 00:12:41.291354       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123] <==
	I1212 00:11:42.421948       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:11:42.546170       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 00:11:42.646270       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 00:11:42.646303       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1212 00:11:42.646377       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:11:42.723153       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:11:42.723287       1 server_linux.go:132] "Using iptables Proxier"
	I1212 00:11:42.819490       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:11:42.821280       1 server.go:527] "Version info" version="v1.34.2"
	I1212 00:11:42.825284       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:11:42.855018       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:11:42.855047       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:11:42.855465       1 config.go:200] "Starting service config controller"
	I1212 00:11:42.855472       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:11:42.855808       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:11:42.855815       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:11:42.856229       1 config.go:309] "Starting node config controller"
	I1212 00:11:42.856235       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:11:42.856241       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:11:42.961256       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:11:42.961293       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 00:11:42.961310       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a] <==
	I1212 00:11:35.024356       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:11:35.027385       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 00:11:35.027532       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:11:35.027556       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:11:35.027574       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 00:11:35.032029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 00:11:35.038575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1212 00:11:35.040154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 00:11:35.040309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 00:11:35.040565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 00:11:35.040693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 00:11:35.040934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 00:11:35.041055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 00:11:35.041196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 00:11:35.041308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 00:11:35.041443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 00:11:35.041542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 00:11:35.041643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 00:11:35.041752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 00:11:35.041879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 00:11:35.042004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 00:11:35.042250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 00:11:35.042454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 00:11:35.042522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1212 00:11:36.627910       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 00:14:36 addons-199484 kubelet[1285]: I1212 00:14:36.970722    1285 scope.go:117] "RemoveContainer" containerID="3ff28337f5829d1b1a888468942724848ccb5b4a42dd7e223b38b02534036726"
	Dec 12 00:14:36 addons-199484 kubelet[1285]: E1212 00:14:36.977427    1285 manager.go:1116] Failed to create existing container: /crio/crio-7c603b4b58fe535008f31839a4f01b4493ec173b63011a0dd23e0ce5563a41e9: Error finding container 7c603b4b58fe535008f31839a4f01b4493ec173b63011a0dd23e0ce5563a41e9: Status 404 returned error can't find the container with id 7c603b4b58fe535008f31839a4f01b4493ec173b63011a0dd23e0ce5563a41e9
	Dec 12 00:14:37 addons-199484 kubelet[1285]: E1212 00:14:37.025679    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4030653cb04c0be9e690ae77c4fee765d07397936b1926e80709ef81f24ae54b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4030653cb04c0be9e690ae77c4fee765d07397936b1926e80709ef81f24ae54b/diff: no such file or directory, extraDiskErr: <nil>
	Dec 12 00:14:40 addons-199484 kubelet[1285]: I1212 00:14:40.727636    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=6.727421624 podStartE2EDuration="6.727421624s" podCreationTimestamp="2025-12-12 00:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 00:14:35.100645773 +0000 UTC m=+178.422325541" watchObservedRunningTime="2025-12-12 00:14:40.727421624 +0000 UTC m=+184.049101400"
	Dec 12 00:14:40 addons-199484 kubelet[1285]: I1212 00:14:40.970385    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62l6m\" (UniqueName: \"kubernetes.io/projected/563b884f-5753-4f8f-bd5b-1a1b9e2d517f-kube-api-access-62l6m\") pod \"563b884f-5753-4f8f-bd5b-1a1b9e2d517f\" (UID: \"563b884f-5753-4f8f-bd5b-1a1b9e2d517f\") "
	Dec 12 00:14:40 addons-199484 kubelet[1285]: I1212 00:14:40.970558    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^89834e91-d6ef-11f0-b0cf-ce93cbb89d9a\") pod \"563b884f-5753-4f8f-bd5b-1a1b9e2d517f\" (UID: \"563b884f-5753-4f8f-bd5b-1a1b9e2d517f\") "
	Dec 12 00:14:40 addons-199484 kubelet[1285]: I1212 00:14:40.970619    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/563b884f-5753-4f8f-bd5b-1a1b9e2d517f-gcp-creds\") pod \"563b884f-5753-4f8f-bd5b-1a1b9e2d517f\" (UID: \"563b884f-5753-4f8f-bd5b-1a1b9e2d517f\") "
	Dec 12 00:14:40 addons-199484 kubelet[1285]: I1212 00:14:40.971179    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/563b884f-5753-4f8f-bd5b-1a1b9e2d517f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "563b884f-5753-4f8f-bd5b-1a1b9e2d517f" (UID: "563b884f-5753-4f8f-bd5b-1a1b9e2d517f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 12 00:14:40 addons-199484 kubelet[1285]: I1212 00:14:40.974894    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/563b884f-5753-4f8f-bd5b-1a1b9e2d517f-kube-api-access-62l6m" (OuterVolumeSpecName: "kube-api-access-62l6m") pod "563b884f-5753-4f8f-bd5b-1a1b9e2d517f" (UID: "563b884f-5753-4f8f-bd5b-1a1b9e2d517f"). InnerVolumeSpecName "kube-api-access-62l6m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 00:14:40 addons-199484 kubelet[1285]: I1212 00:14:40.975285    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^89834e91-d6ef-11f0-b0cf-ce93cbb89d9a" (OuterVolumeSpecName: "task-pv-storage") pod "563b884f-5753-4f8f-bd5b-1a1b9e2d517f" (UID: "563b884f-5753-4f8f-bd5b-1a1b9e2d517f"). InnerVolumeSpecName "pvc-22c8d9cb-40e3-4791-96ff-c416968e7fdd". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 12 00:14:41 addons-199484 kubelet[1285]: I1212 00:14:41.071807    1285 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-22c8d9cb-40e3-4791-96ff-c416968e7fdd\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^89834e91-d6ef-11f0-b0cf-ce93cbb89d9a\") on node \"addons-199484\" "
	Dec 12 00:14:41 addons-199484 kubelet[1285]: I1212 00:14:41.071844    1285 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/563b884f-5753-4f8f-bd5b-1a1b9e2d517f-gcp-creds\") on node \"addons-199484\" DevicePath \"\""
	Dec 12 00:14:41 addons-199484 kubelet[1285]: I1212 00:14:41.071859    1285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-62l6m\" (UniqueName: \"kubernetes.io/projected/563b884f-5753-4f8f-bd5b-1a1b9e2d517f-kube-api-access-62l6m\") on node \"addons-199484\" DevicePath \"\""
	Dec 12 00:14:41 addons-199484 kubelet[1285]: I1212 00:14:41.076570    1285 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-22c8d9cb-40e3-4791-96ff-c416968e7fdd" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^89834e91-d6ef-11f0-b0cf-ce93cbb89d9a") on node "addons-199484"
	Dec 12 00:14:41 addons-199484 kubelet[1285]: I1212 00:14:41.114029    1285 scope.go:117] "RemoveContainer" containerID="c864883bc306ce8c07c81f5786ec4bd3186eebd9dd077cd7aea2af915ae86c63"
	Dec 12 00:14:41 addons-199484 kubelet[1285]: I1212 00:14:41.124886    1285 scope.go:117] "RemoveContainer" containerID="c864883bc306ce8c07c81f5786ec4bd3186eebd9dd077cd7aea2af915ae86c63"
	Dec 12 00:14:41 addons-199484 kubelet[1285]: E1212 00:14:41.126717    1285 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c864883bc306ce8c07c81f5786ec4bd3186eebd9dd077cd7aea2af915ae86c63\": container with ID starting with c864883bc306ce8c07c81f5786ec4bd3186eebd9dd077cd7aea2af915ae86c63 not found: ID does not exist" containerID="c864883bc306ce8c07c81f5786ec4bd3186eebd9dd077cd7aea2af915ae86c63"
	Dec 12 00:14:41 addons-199484 kubelet[1285]: I1212 00:14:41.126897    1285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c864883bc306ce8c07c81f5786ec4bd3186eebd9dd077cd7aea2af915ae86c63"} err="failed to get container status \"c864883bc306ce8c07c81f5786ec4bd3186eebd9dd077cd7aea2af915ae86c63\": rpc error: code = NotFound desc = could not find container \"c864883bc306ce8c07c81f5786ec4bd3186eebd9dd077cd7aea2af915ae86c63\": container with ID starting with c864883bc306ce8c07c81f5786ec4bd3186eebd9dd077cd7aea2af915ae86c63 not found: ID does not exist"
	Dec 12 00:14:41 addons-199484 kubelet[1285]: I1212 00:14:41.174545    1285 reconciler_common.go:299] "Volume detached for volume \"pvc-22c8d9cb-40e3-4791-96ff-c416968e7fdd\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^89834e91-d6ef-11f0-b0cf-ce93cbb89d9a\") on node \"addons-199484\" DevicePath \"\""
	Dec 12 00:14:42 addons-199484 kubelet[1285]: I1212 00:14:42.815578    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="563b884f-5753-4f8f-bd5b-1a1b9e2d517f" path="/var/lib/kubelet/pods/563b884f-5753-4f8f-bd5b-1a1b9e2d517f/volumes"
	Dec 12 00:15:18 addons-199484 kubelet[1285]: I1212 00:15:18.813787    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-d69pq" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:15:28 addons-199484 kubelet[1285]: I1212 00:15:28.812740    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-rj8pk" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:15:43 addons-199484 kubelet[1285]: I1212 00:15:43.812396    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-4jhc7" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:16:29 addons-199484 kubelet[1285]: I1212 00:16:29.898637    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjxt\" (UniqueName: \"kubernetes.io/projected/19ed8b57-9710-4832-b7bc-a5558595ae9f-kube-api-access-qxjxt\") pod \"hello-world-app-5d498dc89-cjgsd\" (UID: \"19ed8b57-9710-4832-b7bc-a5558595ae9f\") " pod="default/hello-world-app-5d498dc89-cjgsd"
	Dec 12 00:16:29 addons-199484 kubelet[1285]: I1212 00:16:29.898717    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/19ed8b57-9710-4832-b7bc-a5558595ae9f-gcp-creds\") pod \"hello-world-app-5d498dc89-cjgsd\" (UID: \"19ed8b57-9710-4832-b7bc-a5558595ae9f\") " pod="default/hello-world-app-5d498dc89-cjgsd"
	
	
	==> storage-provisioner [e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a] <==
	W1212 00:16:07.800059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:09.802896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:09.809730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:11.812391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:11.817036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:13.820518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:13.827518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:15.831413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:15.835724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:17.838372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:17.843133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:19.845661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:19.849828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:21.852416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:21.856555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:23.860163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:23.864913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:25.869873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:25.878596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:27.882039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:27.886645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:29.917875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:29.929919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:31.934168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:16:31.939095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
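The storage-provisioner warnings at the end of the log above come from polling v1 Endpoints, which Kubernetes deprecates from v1.33 in favor of discovery.k8s.io/v1 EndpointSlices. A minimal client-go sketch of the suggested replacement, assuming in-cluster config and the kube-system namespace:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").
            List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, s := range slices.Items {
            fmt.Println(s.Name, len(s.Endpoints))
        }
    }
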
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-199484 -n addons-199484
helpers_test.go:270: (dbg) Run:  kubectl --context addons-199484 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-75m2k ingress-nginx-admission-patch-5c76k registry-creds-764b6fb674-mf9j5
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-199484 describe pod ingress-nginx-admission-create-75m2k ingress-nginx-admission-patch-5c76k registry-creds-764b6fb674-mf9j5
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-199484 describe pod ingress-nginx-admission-create-75m2k ingress-nginx-admission-patch-5c76k registry-creds-764b6fb674-mf9j5: exit status 1 (91.375483ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-75m2k" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5c76k" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-mf9j5" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-199484 describe pod ingress-nginx-admission-create-75m2k ingress-nginx-admission-patch-5c76k registry-creds-764b6fb674-mf9j5: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (265.605702ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:16:33.112260  501288 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:16:33.113059  501288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:16:33.113103  501288 out.go:374] Setting ErrFile to fd 2...
	I1212 00:16:33.113123  501288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:16:33.113438  501288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:16:33.113804  501288 mustload.go:66] Loading cluster: addons-199484
	I1212 00:16:33.114241  501288 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:16:33.114280  501288 addons.go:622] checking whether the cluster is paused
	I1212 00:16:33.114433  501288 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:16:33.114466  501288 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:16:33.115181  501288 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:16:33.134122  501288 ssh_runner.go:195] Run: systemctl --version
	I1212 00:16:33.134178  501288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:16:33.151650  501288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:16:33.257872  501288 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:16:33.257963  501288 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:16:33.292286  501288 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:16:33.292340  501288 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:16:33.292347  501288 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:16:33.292351  501288 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:16:33.292354  501288 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:16:33.292358  501288 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:16:33.292361  501288 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:16:33.292365  501288 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:16:33.292368  501288 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:16:33.292375  501288 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:16:33.292379  501288 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:16:33.292382  501288 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:16:33.292385  501288 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:16:33.292388  501288 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:16:33.292392  501288 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:16:33.292397  501288 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:16:33.292413  501288 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:16:33.292417  501288 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:16:33.292420  501288 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:16:33.292423  501288 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:16:33.292428  501288 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:16:33.292431  501288 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:16:33.292434  501288 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:16:33.292438  501288 cri.go:89] found id: ""
	I1212 00:16:33.292498  501288 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:16:33.308098  501288 out.go:203] 
	W1212 00:16:33.311027  501288 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:16:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:16:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:16:33.311065  501288 out.go:285] * 
	* 
	W1212 00:16:33.317892  501288 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:16:33.320895  501288 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
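
The exit status 11 above comes from the paused-state check rather than the addon disable itself: minikube shells out to "sudo runc list -f json", and on this CRI-O node /run/runc does not exist, so the check fails before any disable work happens. A hedged sketch of a guard that tolerates the missing runc state directory (the crictl fallback is an illustration, not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func listContainers() ([]byte, error) {
        // Only ask runc if its state directory exists; a CRI-O node may use
        // a different OCI runtime (e.g. crun) and have no /run/runc at all.
        if _, err := os.Stat("/run/runc"); err == nil {
            return exec.Command("sudo", "runc", "list", "-f", "json").Output()
        }
        // Assumed fallback: ask the CRI runtime directly instead.
        return exec.Command("sudo", "crictl", "ps", "-a", "--output", "json").Output()
    }

    func main() {
        out, err := listContainers()
        if err != nil {
            fmt.Fprintf(os.Stderr, "check paused: %v\n", err)
            os.Exit(1)
        }
        fmt.Printf("%s\n", out)
    }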
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable ingress --alsologtostderr -v=1: exit status 11 (321.258807ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:16:33.396123  501388 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:16:33.397129  501388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:16:33.397157  501388 out.go:374] Setting ErrFile to fd 2...
	I1212 00:16:33.397164  501388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:16:33.397488  501388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:16:33.397916  501388 mustload.go:66] Loading cluster: addons-199484
	I1212 00:16:33.398358  501388 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:16:33.398378  501388 addons.go:622] checking whether the cluster is paused
	I1212 00:16:33.398609  501388 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:16:33.398628  501388 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:16:33.399332  501388 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:16:33.418174  501388 ssh_runner.go:195] Run: systemctl --version
	I1212 00:16:33.418232  501388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:16:33.450415  501388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:16:33.573509  501388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:16:33.573602  501388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:16:33.610455  501388 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:16:33.610484  501388 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:16:33.610490  501388 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:16:33.610494  501388 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:16:33.610497  501388 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:16:33.610501  501388 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:16:33.610504  501388 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:16:33.610508  501388 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:16:33.610511  501388 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:16:33.610519  501388 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:16:33.610522  501388 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:16:33.610525  501388 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:16:33.610529  501388 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:16:33.610532  501388 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:16:33.610540  501388 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:16:33.610551  501388 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:16:33.610561  501388 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:16:33.610565  501388 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:16:33.610569  501388 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:16:33.610572  501388 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:16:33.610575  501388 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:16:33.610578  501388 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:16:33.610582  501388 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:16:33.610594  501388 cri.go:89] found id: ""
	I1212 00:16:33.610653  501388 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:16:33.627493  501388 out.go:203] 
	W1212 00:16:33.630537  501388 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:16:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:16:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:16:33.630572  501388 out.go:285] * 
	* 
	W1212 00:16:33.637060  501388 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:16:33.640014  501388 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.45s)
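
Note: every addons enable/disable failure in this run shares one signature. Before touching an addon, minikube checks whether the cluster is paused: it lists kube-system containers with crictl (the cri.go:89 "found id" lines above), then runs `sudo runc list -f json` on the node. Here /run/runc does not exist, so runc itself exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED before attempting the disable. Below is a minimal Go sketch of that probe, replayed over `minikube ssh` instead of the raw SSH client the logs show; the profile name and both remote commands come from the log, the rest is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// node runs a command inside the minikube node; -p selects the profile
// named in the log. Assumes `minikube` is on PATH.
func node(cmd string) (string, error) {
	out, err := exec.Command("minikube", "-p", "addons-199484", "ssh", "--", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// Step 1 of the paused check: list kube-system container IDs via
	// crictl (this succeeded above; see the cri.go:89 "found id" lines).
	ids, err := node("sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system")
	fmt.Printf("crictl (err=%v):\n%s\n", err, ids)

	// Step 2: ask runc for its container list. On this node /run/runc is
	// missing, so runc exits 1 and minikube maps that to
	// MK_ADDON_DISABLE_PAUSED instead of proceeding with the disable.
	out, err := node("sudo runc list -f json")
	fmt.Printf("runc (err=%v):\n%s\n", err, out)
}

Run against this profile, the second command should reproduce the `open /run/runc: no such file or directory` error verbatim.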

TestAddons/parallel/InspektorGadget (6.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-926zk" [2503ff20-073b-44ff-9033-c5f84490abbd] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00329488s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (254.799932ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:14:48.156108  500260 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:14:48.156942  500260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:48.156959  500260 out.go:374] Setting ErrFile to fd 2...
	I1212 00:14:48.156965  500260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:48.157214  500260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:14:48.157516  500260 mustload.go:66] Loading cluster: addons-199484
	I1212 00:14:48.157904  500260 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:48.157922  500260 addons.go:622] checking whether the cluster is paused
	I1212 00:14:48.158028  500260 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:48.158044  500260 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:14:48.158532  500260 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:14:48.177897  500260 ssh_runner.go:195] Run: systemctl --version
	I1212 00:14:48.177985  500260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:14:48.197089  500260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:14:48.301278  500260 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:14:48.301371  500260 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:14:48.331703  500260 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:14:48.331781  500260 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:14:48.331793  500260 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:14:48.331799  500260 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:14:48.331802  500260 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:14:48.331806  500260 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:14:48.331809  500260 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:14:48.331813  500260 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:14:48.331816  500260 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:14:48.331822  500260 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:14:48.331828  500260 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:14:48.331832  500260 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:14:48.331835  500260 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:14:48.331838  500260 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:14:48.331842  500260 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:14:48.331848  500260 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:14:48.331854  500260 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:14:48.331858  500260 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:14:48.331861  500260 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:14:48.331864  500260 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:14:48.331869  500260 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:14:48.331876  500260 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:14:48.331879  500260 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:14:48.331882  500260 cri.go:89] found id: ""
	I1212 00:14:48.331935  500260 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:14:48.346945  500260 out.go:203] 
	W1212 00:14:48.349674  500260 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:14:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:14:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:14:48.349696  500260 out.go:285] * 
	* 
	W1212 00:14:48.356283  500260 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:14:48.359142  500260 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)
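
The gadget pod was healthy within about six seconds; only the disable step failed, with the same runc probe error as above. For reference, the wait at addons_test.go:825 amounts to polling pods by label until they report Running. A minimal sketch of such a wait, assuming a single matching pod; the context, namespace, label, and 8m0s budget come from the log, while the loop and jsonpath query are illustrative rather than the harness's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(8 * time.Minute) // the harness's 8m0s budget
	for time.Now().Before(deadline) {
		// Phase of every pod matching the label, space separated.
		out, _ := exec.Command("kubectl", "--context", "addons-199484",
			"-n", "gadget", "get", "pods", "-l", "k8s-app=gadget",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("k8s-app=gadget healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for k8s-app=gadget")
}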

TestAddons/parallel/MetricsServer (5.44s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.207384ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-mp4fx" [9036b134-5412-45fa-b9e5-a5e100672fb9] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006165323s
addons_test.go:465: (dbg) Run:  kubectl --context addons-199484 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (324.104687ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:14:09.924195  499263 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:14:09.925295  499263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:09.925436  499263 out.go:374] Setting ErrFile to fd 2...
	I1212 00:14:09.925456  499263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:09.925734  499263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:14:09.926053  499263 mustload.go:66] Loading cluster: addons-199484
	I1212 00:14:09.926458  499263 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:09.926476  499263 addons.go:622] checking whether the cluster is paused
	I1212 00:14:09.926582  499263 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:09.926597  499263 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:14:09.927179  499263 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:14:09.946856  499263 ssh_runner.go:195] Run: systemctl --version
	I1212 00:14:09.946924  499263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:14:09.970241  499263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:14:10.102355  499263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:14:10.102442  499263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:14:10.153766  499263 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:14:10.153790  499263 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:14:10.153797  499263 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:14:10.153800  499263 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:14:10.153804  499263 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:14:10.153807  499263 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:14:10.153810  499263 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:14:10.153813  499263 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:14:10.153816  499263 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:14:10.153822  499263 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:14:10.153825  499263 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:14:10.153828  499263 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:14:10.153831  499263 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:14:10.153834  499263 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:14:10.153837  499263 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:14:10.153842  499263 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:14:10.153845  499263 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:14:10.153849  499263 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:14:10.153852  499263 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:14:10.153855  499263 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:14:10.153859  499263 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:14:10.153862  499263 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:14:10.153865  499263 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:14:10.153867  499263 cri.go:89] found id: ""
	I1212 00:14:10.153918  499263 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:14:10.181656  499263 out.go:203] 
	W1212 00:14:10.184574  499263 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:14:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:14:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:14:10.184613  499263 out.go:285] * 
	* 
	W1212 00:14:10.191264  499263 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:14:10.194204  499263 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.44s)
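
metrics-server reached Running and `kubectl top pods` succeeded; the failure is again confined to the disable call. Since `kubectl top` depends on metrics-server serving the aggregated metrics.k8s.io API, that dependency can also be probed directly. A sketch, with the context name taken from the log and the raw path being the standard metrics-server endpoint:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe the aggregated metrics API that `kubectl top pods` depends on.
	out, err := exec.Command("kubectl", "--context", "addons-199484",
		"get", "--raw", "/apis/metrics.k8s.io/v1beta1/pods").CombinedOutput()
	if err != nil {
		fmt.Printf("metrics.k8s.io not ready: %v\n%s", err, out)
		return
	}
	fmt.Println("metrics.k8s.io is serving; `kubectl top pods` can work")
}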

TestAddons/parallel/CSI (33.27s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1212 00:14:08.841875  490954 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1212 00:14:08.846669  490954 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1212 00:14:08.846843  490954 kapi.go:107] duration metric: took 4.974227ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 4.987462ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-199484 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-199484 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [44eb5dd3-4b09-43af-82f0-610bc0e36807] Pending
helpers_test.go:353: "task-pv-pod" [44eb5dd3-4b09-43af-82f0-610bc0e36807] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.003292594s
addons_test.go:574: (dbg) Run:  kubectl --context addons-199484 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-199484 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-199484 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-199484 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-199484 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-199484 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-199484 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [563b884f-5753-4f8f-bd5b-1a1b9e2d517f] Pending
helpers_test.go:353: "task-pv-pod-restore" [563b884f-5753-4f8f-bd5b-1a1b9e2d517f] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.00421984s
addons_test.go:616: (dbg) Run:  kubectl --context addons-199484 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-199484 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-199484 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (260.341275ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:14:41.583444  500134 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:14:41.584456  500134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:41.584513  500134 out.go:374] Setting ErrFile to fd 2...
	I1212 00:14:41.584569  500134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:41.585433  500134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:14:41.585868  500134 mustload.go:66] Loading cluster: addons-199484
	I1212 00:14:41.586409  500134 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:41.586433  500134 addons.go:622] checking whether the cluster is paused
	I1212 00:14:41.586554  500134 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:41.586572  500134 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:14:41.587182  500134 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:14:41.605201  500134 ssh_runner.go:195] Run: systemctl --version
	I1212 00:14:41.605268  500134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:14:41.624354  500134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:14:41.729994  500134 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:14:41.730083  500134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:14:41.759901  500134 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:14:41.759923  500134 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:14:41.759928  500134 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:14:41.759932  500134 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:14:41.759936  500134 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:14:41.759940  500134 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:14:41.759943  500134 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:14:41.759946  500134 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:14:41.759950  500134 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:14:41.759967  500134 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:14:41.759971  500134 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:14:41.759974  500134 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:14:41.759978  500134 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:14:41.759981  500134 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:14:41.759984  500134 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:14:41.759991  500134 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:14:41.760002  500134 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:14:41.760008  500134 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:14:41.760012  500134 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:14:41.760015  500134 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:14:41.760020  500134 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:14:41.760023  500134 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:14:41.760026  500134 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:14:41.760031  500134 cri.go:89] found id: ""
	I1212 00:14:41.760091  500134 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:14:41.780838  500134 out.go:203] 
	W1212 00:14:41.783725  500134 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:14:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:14:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:14:41.783755  500134 out.go:285] * 
	* 
	W1212 00:14:41.790220  500134 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:14:41.793054  500134 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (304.974716ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:14:41.850365  500178 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:14:41.854779  500178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:41.854806  500178 out.go:374] Setting ErrFile to fd 2...
	I1212 00:14:41.854813  500178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:41.855141  500178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:14:41.855486  500178 mustload.go:66] Loading cluster: addons-199484
	I1212 00:14:41.855878  500178 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:41.855898  500178 addons.go:622] checking whether the cluster is paused
	I1212 00:14:41.856009  500178 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:41.856027  500178 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:14:41.856545  500178 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:14:41.875777  500178 ssh_runner.go:195] Run: systemctl --version
	I1212 00:14:41.875841  500178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:14:41.893334  500178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:14:41.997544  500178 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:14:41.997641  500178 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:14:42.057455  500178 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:14:42.057484  500178 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:14:42.057490  500178 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:14:42.057499  500178 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:14:42.057503  500178 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:14:42.057506  500178 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:14:42.057510  500178 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:14:42.057513  500178 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:14:42.057516  500178 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:14:42.057523  500178 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:14:42.057527  500178 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:14:42.057530  500178 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:14:42.057534  500178 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:14:42.057537  500178 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:14:42.057541  500178 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:14:42.057546  500178 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:14:42.057555  500178 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:14:42.057563  500178 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:14:42.057569  500178 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:14:42.057572  500178 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:14:42.057577  500178 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:14:42.057581  500178 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:14:42.057585  500178 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:14:42.057588  500178 cri.go:89] found id: ""
	I1212 00:14:42.057664  500178 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:14:42.077146  500178 out.go:203] 
	W1212 00:14:42.081185  500178 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:14:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:14:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:14:42.081252  500178 out.go:285] * 
	* 
	W1212 00:14:42.095531  500178 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:14:42.098795  500178 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (33.27s)
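
The CSI snapshot/restore flow passed end to end; only the two disable calls failed, again on the runc probe. The flow above, replayed as plain kubectl calls: manifest paths, object names, and the context come from the log, and the phase waits that helpers_test.go performs between steps are elided:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs one kubectl call against the profile's context and prints
// the outcome; error handling is deliberately minimal for the sketch.
func kubectl(args ...string) {
	full := append([]string{"--context", "addons-199484"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("kubectl %v: err=%v\n%s", args, err, out)
}

func main() {
	kubectl("create", "-f", "testdata/csi-hostpath-driver/pvc.yaml")            // claim storage (hpvc)
	kubectl("create", "-f", "testdata/csi-hostpath-driver/pv-pod.yaml")         // pod writes through the PVC
	kubectl("create", "-f", "testdata/csi-hostpath-driver/snapshot.yaml")       // snapshot the volume
	kubectl("delete", "pod", "task-pv-pod")
	kubectl("delete", "pvc", "hpvc")
	kubectl("create", "-f", "testdata/csi-hostpath-driver/pvc-restore.yaml")    // new PVC from the snapshot
	kubectl("create", "-f", "testdata/csi-hostpath-driver/pv-pod-restore.yaml") // verify restored data via a pod
	kubectl("delete", "pod", "task-pv-pod-restore")
	kubectl("delete", "pvc", "hpvc-restore")
	kubectl("delete", "volumesnapshot", "new-snapshot-demo")
}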

TestAddons/parallel/Headlamp (3.35s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-199484 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-199484 --alsologtostderr -v=1: exit status 11 (271.547927ms)
                                                
-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:13:43.512435  498088 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:13:43.513195  498088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:43.513232  498088 out.go:374] Setting ErrFile to fd 2...
	I1212 00:13:43.513255  498088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:43.513544  498088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:13:43.513931  498088 mustload.go:66] Loading cluster: addons-199484
	I1212 00:13:43.514486  498088 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:13:43.514551  498088 addons.go:622] checking whether the cluster is paused
	I1212 00:13:43.514938  498088 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:13:43.514986  498088 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:13:43.515592  498088 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:13:43.535097  498088 ssh_runner.go:195] Run: systemctl --version
	I1212 00:13:43.535152  498088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:13:43.553686  498088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:13:43.665398  498088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:13:43.665489  498088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:13:43.696114  498088 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:13:43.696137  498088 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:13:43.696142  498088 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:13:43.696146  498088 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:13:43.696149  498088 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:13:43.696152  498088 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:13:43.696155  498088 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:13:43.696158  498088 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:13:43.696161  498088 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:13:43.696170  498088 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:13:43.696173  498088 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:13:43.696177  498088 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:13:43.696180  498088 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:13:43.696184  498088 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:13:43.696187  498088 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:13:43.696195  498088 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:13:43.696206  498088 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:13:43.696211  498088 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:13:43.696214  498088 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:13:43.696217  498088 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:13:43.696222  498088 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:13:43.696225  498088 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:13:43.696228  498088 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:13:43.696231  498088 cri.go:89] found id: ""
	I1212 00:13:43.696282  498088 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:13:43.711804  498088 out.go:203] 
	W1212 00:13:43.714551  498088 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:13:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:13:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:13:43.714577  498088 out.go:285] * 
	* 
	W1212 00:13:43.721245  498088 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:13:43.724151  498088 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-199484 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-199484
helpers_test.go:244: (dbg) docker inspect addons-199484:

-- stdout --
	[
	    {
	        "Id": "ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4",
	        "Created": "2025-12-12T00:11:12.261776666Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492348,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:11:12.329565428Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4/hostname",
	        "HostsPath": "/var/lib/docker/containers/ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4/hosts",
	        "LogPath": "/var/lib/docker/containers/ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4/ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4-json.log",
	        "Name": "/addons-199484",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-199484:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-199484",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ea606c0010f1e62e2edcf855d824c25aff7de69d48b5f85a9b25920ed7d0dac4",
	                "LowerDir": "/var/lib/docker/overlay2/d51356596acc355d5a6c092cac7e7a8d08960e1901219805b5786939e96f7976-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d51356596acc355d5a6c092cac7e7a8d08960e1901219805b5786939e96f7976/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d51356596acc355d5a6c092cac7e7a8d08960e1901219805b5786939e96f7976/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d51356596acc355d5a6c092cac7e7a8d08960e1901219805b5786939e96f7976/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-199484",
	                "Source": "/var/lib/docker/volumes/addons-199484/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-199484",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-199484",
	                "name.minikube.sigs.k8s.io": "addons-199484",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eae61459bc079810e3bdb36dfba9b7a4ed351e3af9cb3236fdaddc4cf5dfe19d",
	            "SandboxKey": "/var/run/docker/netns/eae61459bc07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-199484": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:98:0a:c9:3f:71",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ce92482db3562eb95f488fcf02c1e6dbbc66a1d250ac9e97b5672f5fb8af901",
	                    "EndpointID": "bd3db023fc14cbf774f28492983aee2fadd2e8070224b972d2973fc38d9c2ece",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-199484",
	                        "ea606c0010f1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
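The full JSON dump above can be narrowed with docker inspect Go templates, in the same style minikube itself uses later in this log; a minimal sketch, with the container name taken from this report:

    # Container state and the IP on the cluster network.
    docker inspect addons-199484 \
      --format 'status={{.State.Status}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
    # Host port published for the API server (8443/tcp).
    docker inspect addons-199484 \
      --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

Against the state captured above, these would print running, 192.168.49.2 and 33171.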
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-199484 -n addons-199484
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-199484 logs -n 25: (1.571444947s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-813428 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-813428   │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │ 12 Dec 25 00:10 UTC │
	│ delete  │ -p download-only-813428                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-813428   │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │ 12 Dec 25 00:10 UTC │
	│ start   │ -o=json --download-only -p download-only-539419 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-539419   │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │ 12 Dec 25 00:10 UTC │
	│ delete  │ -p download-only-539419                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-539419   │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │ 12 Dec 25 00:10 UTC │
	│ start   │ -o=json --download-only -p download-only-510166 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-510166   │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │ 12 Dec 25 00:11 UTC │
	│ delete  │ -p download-only-510166                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-510166   │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │ 12 Dec 25 00:11 UTC │
	│ delete  │ -p download-only-813428                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-813428   │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │ 12 Dec 25 00:11 UTC │
	│ delete  │ -p download-only-539419                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-539419   │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │ 12 Dec 25 00:11 UTC │
	│ delete  │ -p download-only-510166                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-510166   │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │ 12 Dec 25 00:11 UTC │
	│ start   │ --download-only -p download-docker-950363 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-950363 │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │                     │
	│ delete  │ -p download-docker-950363                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-950363 │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │ 12 Dec 25 00:11 UTC │
	│ start   │ --download-only -p binary-mirror-824111 --alsologtostderr --binary-mirror http://127.0.0.1:35743 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-824111   │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │                     │
	│ delete  │ -p binary-mirror-824111                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-824111   │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │ 12 Dec 25 00:11 UTC │
	│ addons  │ enable dashboard -p addons-199484                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │                     │
	│ addons  │ disable dashboard -p addons-199484                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │                     │
	│ start   │ -p addons-199484 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:11 UTC │ 12 Dec 25 00:13 UTC │
	│ addons  │ addons-199484 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:13 UTC │                     │
	│ addons  │ addons-199484 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:13 UTC │                     │
	│ addons  │ enable headlamp -p addons-199484 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-199484          │ jenkins │ v1.37.0 │ 12 Dec 25 00:13 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
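When only one section of the minikube log dump is needed, the section headers shown here can drive a simple filter; a sketch, assuming the "==> Audit <==" and "==> Last Start <==" header names are stable:

    # Print just the Audit table from a fresh log dump.
    out/minikube-linux-arm64 -p addons-199484 logs | \
      sed -n '/==> Audit <==/,/==> Last Start <==/p'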
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:11:05
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:11:05.886863  491960 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:11:05.887048  491960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:05.887078  491960 out.go:374] Setting ErrFile to fd 2...
	I1212 00:11:05.887098  491960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:05.887356  491960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:11:05.887828  491960 out.go:368] Setting JSON to false
	I1212 00:11:05.888701  491960 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10411,"bootTime":1765487855,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:11:05.888800  491960 start.go:143] virtualization:  
	I1212 00:11:05.892192  491960 out.go:179] * [addons-199484] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:11:05.895875  491960 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:11:05.895984  491960 notify.go:221] Checking for updates...
	I1212 00:11:05.901632  491960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:11:05.904616  491960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:11:05.907673  491960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:11:05.910753  491960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:11:05.913692  491960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:11:05.916847  491960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:11:05.946479  491960 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:11:05.946620  491960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:06.019080  491960 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 00:11:06.009218883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:11:06.019201  491960 docker.go:319] overlay module found
	I1212 00:11:06.022349  491960 out.go:179] * Using the docker driver based on user configuration
	I1212 00:11:06.025292  491960 start.go:309] selected driver: docker
	I1212 00:11:06.025322  491960 start.go:927] validating driver "docker" against <nil>
	I1212 00:11:06.025338  491960 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:11:06.026110  491960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:11:06.085034  491960 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 00:11:06.075938082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:11:06.085224  491960 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 00:11:06.085448  491960 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:11:06.088316  491960 out.go:179] * Using Docker driver with root privileges
	I1212 00:11:06.091205  491960 cni.go:84] Creating CNI manager for ""
	I1212 00:11:06.091276  491960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:11:06.091291  491960 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:11:06.091366  491960 start.go:353] cluster config:
	{Name:addons-199484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-199484 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:11:06.094454  491960 out.go:179] * Starting "addons-199484" primary control-plane node in "addons-199484" cluster
	I1212 00:11:06.097192  491960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:11:06.100068  491960 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:11:06.102817  491960 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:11:06.102863  491960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 00:11:06.102877  491960 cache.go:65] Caching tarball of preloaded images
	I1212 00:11:06.102891  491960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:11:06.102958  491960 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:11:06.102967  491960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 00:11:06.103295  491960 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/config.json ...
	I1212 00:11:06.103314  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/config.json: {Name:mkb0180a663286b7d6ac48daf7c76698a2b89094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:06.121696  491960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:11:06.121719  491960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:11:06.121733  491960 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:11:06.121763  491960 start.go:360] acquireMachinesLock for addons-199484: {Name:mk0ad7b9808d61c7612549b1b854c58edfb0a661 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:11:06.121870  491960 start.go:364] duration metric: took 86.349µs to acquireMachinesLock for "addons-199484"
	I1212 00:11:06.121900  491960 start.go:93] Provisioning new machine with config: &{Name:addons-199484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-199484 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:11:06.121966  491960 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:11:06.125369  491960 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1212 00:11:06.125604  491960 start.go:159] libmachine.API.Create for "addons-199484" (driver="docker")
	I1212 00:11:06.125637  491960 client.go:173] LocalClient.Create starting
	I1212 00:11:06.125757  491960 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem
	I1212 00:11:06.365067  491960 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem
	I1212 00:11:07.044334  491960 cli_runner.go:164] Run: docker network inspect addons-199484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:11:07.064236  491960 cli_runner.go:211] docker network inspect addons-199484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:11:07.064322  491960 network_create.go:284] running [docker network inspect addons-199484] to gather additional debugging logs...
	I1212 00:11:07.064352  491960 cli_runner.go:164] Run: docker network inspect addons-199484
	W1212 00:11:07.084577  491960 cli_runner.go:211] docker network inspect addons-199484 returned with exit code 1
	I1212 00:11:07.084610  491960 network_create.go:287] error running [docker network inspect addons-199484]: docker network inspect addons-199484: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-199484 not found
	I1212 00:11:07.084629  491960 network_create.go:289] output of [docker network inspect addons-199484]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-199484 not found
	
	** /stderr **
	I1212 00:11:07.084721  491960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:11:07.102774  491960 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197caa0}
	I1212 00:11:07.102825  491960 network_create.go:124] attempt to create docker network addons-199484 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 00:11:07.102889  491960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-199484 addons-199484
	I1212 00:11:07.160631  491960 network_create.go:108] docker network addons-199484 192.168.49.0/24 created
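The created network can be checked against the subnet calculated above; a minimal sketch using standard docker network inspect templates, with the network name and expected values taken from this log:

    # Confirm subnet and gateway of the freshly created network.
    docker network inspect addons-199484 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # Expected, given the log above: 192.168.49.0/24 192.168.49.1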
	I1212 00:11:07.160658  491960 kic.go:121] calculated static IP "192.168.49.2" for the "addons-199484" container
	I1212 00:11:07.160733  491960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:11:07.176071  491960 cli_runner.go:164] Run: docker volume create addons-199484 --label name.minikube.sigs.k8s.io=addons-199484 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:11:07.193398  491960 oci.go:103] Successfully created a docker volume addons-199484
	I1212 00:11:07.193488  491960 cli_runner.go:164] Run: docker run --rm --name addons-199484-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-199484 --entrypoint /usr/bin/test -v addons-199484:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:11:08.267693  491960 cli_runner.go:217] Completed: docker run --rm --name addons-199484-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-199484 --entrypoint /usr/bin/test -v addons-199484:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.074154812s)
	I1212 00:11:08.267724  491960 oci.go:107] Successfully prepared a docker volume addons-199484
	I1212 00:11:08.267769  491960 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:11:08.267781  491960 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:11:08.267845  491960 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-199484:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:11:12.193578  491960 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-199484:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.925686894s)
	I1212 00:11:12.193611  491960 kic.go:203] duration metric: took 3.925825894s to extract preloaded images to volume ...
	W1212 00:11:12.193760  491960 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 00:11:12.193879  491960 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:11:12.248646  491960 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-199484 --name addons-199484 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-199484 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-199484 --network addons-199484 --ip 192.168.49.2 --volume addons-199484:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
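The single-line docker run above is hard to read; the same invocation reflowed, with every flag value copied verbatim from the log line:

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro \
      --volume addons-199484:/var \
      --hostname addons-199484 --name addons-199484 \
      --label created_by.minikube.sigs.k8s.io=true \
      --label name.minikube.sigs.k8s.io=addons-199484 \
      --label role.minikube.sigs.k8s.io= \
      --label mode.minikube.sigs.k8s.io=addons-199484 \
      --network addons-199484 --ip 192.168.49.2 \
      --memory=4096mb --cpus=2 \
      -e container=docker \
      --expose 8443 \
      --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
      --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f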
	I1212 00:11:12.538937  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Running}}
	I1212 00:11:12.565751  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:12.589012  491960 cli_runner.go:164] Run: docker exec addons-199484 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:11:12.634367  491960 oci.go:144] the created container "addons-199484" has a running status.
	I1212 00:11:12.634406  491960 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa...
	I1212 00:11:13.430010  491960 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:11:13.463594  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:13.487014  491960 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:11:13.487039  491960 kic_runner.go:114] Args: [docker exec --privileged addons-199484 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:11:13.534377  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:13.554153  491960 machine.go:94] provisionDockerMachine start ...
	I1212 00:11:13.554266  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:13.571308  491960 main.go:143] libmachine: Using SSH client type: native
	I1212 00:11:13.571911  491960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1212 00:11:13.571927  491960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:11:13.730490  491960 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-199484
	
	I1212 00:11:13.730511  491960 ubuntu.go:182] provisioning hostname "addons-199484"
	I1212 00:11:13.730577  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:13.750265  491960 main.go:143] libmachine: Using SSH client type: native
	I1212 00:11:13.750589  491960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1212 00:11:13.750601  491960 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-199484 && echo "addons-199484" | sudo tee /etc/hostname
	I1212 00:11:13.925013  491960 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-199484
	
	I1212 00:11:13.925175  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:13.943461  491960 main.go:143] libmachine: Using SSH client type: native
	I1212 00:11:13.943771  491960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1212 00:11:13.943792  491960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-199484' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-199484/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-199484' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:11:14.094950  491960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:11:14.094979  491960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 00:11:14.095008  491960 ubuntu.go:190] setting up certificates
	I1212 00:11:14.095024  491960 provision.go:84] configureAuth start
	I1212 00:11:14.095091  491960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-199484
	I1212 00:11:14.111642  491960 provision.go:143] copyHostCerts
	I1212 00:11:14.111814  491960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 00:11:14.111942  491960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 00:11:14.112012  491960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 00:11:14.112066  491960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.addons-199484 san=[127.0.0.1 192.168.49.2 addons-199484 localhost minikube]
	I1212 00:11:14.393430  491960 provision.go:177] copyRemoteCerts
	I1212 00:11:14.393498  491960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:11:14.393541  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:14.410975  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:14.514362  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:11:14.531389  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 00:11:14.548878  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:11:14.565677  491960 provision.go:87] duration metric: took 470.627362ms to configureAuth
	I1212 00:11:14.565703  491960 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:11:14.565887  491960 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:11:14.565988  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:14.583880  491960 main.go:143] libmachine: Using SSH client type: native
	I1212 00:11:14.584191  491960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1212 00:11:14.584214  491960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:11:15.064014  491960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:11:15.064101  491960 machine.go:97] duration metric: took 1.509922315s to provisionDockerMachine
	I1212 00:11:15.064127  491960 client.go:176] duration metric: took 8.93847956s to LocalClient.Create
	I1212 00:11:15.064180  491960 start.go:167] duration metric: took 8.938576674s to libmachine.API.Create "addons-199484"
	I1212 00:11:15.064206  491960 start.go:293] postStartSetup for "addons-199484" (driver="docker")
	I1212 00:11:15.064243  491960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:11:15.064359  491960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:11:15.064428  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:15.082647  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:15.190997  491960 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:11:15.194363  491960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:11:15.194395  491960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:11:15.194407  491960 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 00:11:15.194480  491960 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 00:11:15.194508  491960 start.go:296] duration metric: took 130.286128ms for postStartSetup
	I1212 00:11:15.194881  491960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-199484
	I1212 00:11:15.211523  491960 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/config.json ...
	I1212 00:11:15.211814  491960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:11:15.211869  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:15.228607  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:15.327491  491960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:11:15.331991  491960 start.go:128] duration metric: took 9.210009398s to createHost
	I1212 00:11:15.332016  491960 start.go:83] releasing machines lock for "addons-199484", held for 9.210131832s
	I1212 00:11:15.332086  491960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-199484
	I1212 00:11:15.348595  491960 ssh_runner.go:195] Run: cat /version.json
	I1212 00:11:15.348655  491960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:11:15.348714  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:15.348659  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:15.376209  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:15.377214  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:15.580882  491960 ssh_runner.go:195] Run: systemctl --version
	I1212 00:11:15.587054  491960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:11:15.630104  491960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:11:15.634167  491960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:11:15.634290  491960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:11:15.663112  491960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 00:11:15.663137  491960 start.go:496] detecting cgroup driver to use...
	I1212 00:11:15.663170  491960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:11:15.663223  491960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:11:15.680144  491960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:11:15.692958  491960 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:11:15.693019  491960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:11:15.711808  491960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:11:15.730188  491960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:11:15.855113  491960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:11:15.981273  491960 docker.go:234] disabling docker service ...
	I1212 00:11:15.981341  491960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:11:16.005759  491960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:11:16.020025  491960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:11:16.138918  491960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:11:16.260150  491960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:11:16.273038  491960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:11:16.286950  491960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:11:16.287067  491960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.295770  491960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:11:16.295882  491960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.304994  491960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.313650  491960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.322377  491960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:11:16.330602  491960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.339298  491960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.352236  491960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:11:16.361361  491960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:11:16.368831  491960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:11:16.376059  491960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:11:16.494476  491960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:11:16.661327  491960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:11:16.661439  491960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:11:16.664944  491960 start.go:564] Will wait 60s for crictl version
	I1212 00:11:16.665045  491960 ssh_runner.go:195] Run: which crictl
	I1212 00:11:16.668323  491960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:11:16.695203  491960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:11:16.695295  491960 ssh_runner.go:195] Run: crio --version
	I1212 00:11:16.728514  491960 ssh_runner.go:195] Run: crio --version
	I1212 00:11:16.763836  491960 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 00:11:16.766754  491960 cli_runner.go:164] Run: docker network inspect addons-199484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:11:16.782570  491960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:11:16.786207  491960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:11:16.796087  491960 kubeadm.go:884] updating cluster {Name:addons-199484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-199484 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:11:16.796198  491960 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 00:11:16.796256  491960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:11:16.834853  491960 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:11:16.834878  491960 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:11:16.834935  491960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:11:16.859703  491960 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:11:16.859723  491960 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:11:16.859731  491960 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 00:11:16.859822  491960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-199484 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-199484 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:11:16.859901  491960 ssh_runner.go:195] Run: crio config
	I1212 00:11:16.933667  491960 cni.go:84] Creating CNI manager for ""
	I1212 00:11:16.933698  491960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:11:16.933713  491960 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:11:16.933735  491960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-199484 NodeName:addons-199484 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:11:16.933856  491960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-199484"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:11:16.933930  491960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 00:11:16.941353  491960 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:11:16.941440  491960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:11:16.948553  491960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 00:11:16.961156  491960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:11:16.973843  491960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1212 00:11:16.987350  491960 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:11:16.990745  491960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:11:17.000743  491960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:11:17.112775  491960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:11:17.137108  491960 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484 for IP: 192.168.49.2
	I1212 00:11:17.137140  491960 certs.go:195] generating shared ca certs ...
	I1212 00:11:17.137156  491960 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.137312  491960 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 00:11:17.545149  491960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt ...
	I1212 00:11:17.545181  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt: {Name:mkc9e8c03ac146bc0b82eb43d2f9f0c2d520900a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.545373  491960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key ...
	I1212 00:11:17.545386  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key: {Name:mk0c335221059a76aadf8a9fd23566576e5fa774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.545474  491960 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 00:11:17.718545  491960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt ...
	I1212 00:11:17.718575  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt: {Name:mk7b8eacc0ff96f20a5cd88df0ad0ddcb911fbd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.718784  491960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key ...
	I1212 00:11:17.718797  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key: {Name:mkcb373edc21f86efa547cdc61540c464c7ee641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.718884  491960 certs.go:257] generating profile certs ...
	I1212 00:11:17.718940  491960 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.key
	I1212 00:11:17.718958  491960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt with IP's: []
	I1212 00:11:17.861464  491960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt ...
	I1212 00:11:17.861494  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: {Name:mk0ed8c16fe2ae076e351bd642c46cd1523ae12f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.861670  491960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.key ...
	I1212 00:11:17.861683  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.key: {Name:mk9f1923a31411f739dd491cc954c17da22960c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:17.861764  491960 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.key.e9b1064f
	I1212 00:11:17.861786  491960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.crt.e9b1064f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1212 00:11:18.125621  491960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.crt.e9b1064f ...
	I1212 00:11:18.125654  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.crt.e9b1064f: {Name:mk3c9597be8ae0c0b1e17754be606142bffc8b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:18.125839  491960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.key.e9b1064f ...
	I1212 00:11:18.125853  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.key.e9b1064f: {Name:mk3131f100cd68e2ae2c35057b05df204d041bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:18.125942  491960 certs.go:382] copying /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.crt.e9b1064f -> /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.crt
	I1212 00:11:18.126030  491960 certs.go:386] copying /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.key.e9b1064f -> /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.key
	I1212 00:11:18.126082  491960 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.key
	I1212 00:11:18.126102  491960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.crt with IP's: []
	I1212 00:11:18.811836  491960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.crt ...
	I1212 00:11:18.811868  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.crt: {Name:mk5b01029265356fd9c38f5ba3fd7dc73f714a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:18.812051  491960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.key ...
	I1212 00:11:18.812066  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.key: {Name:mk1b394b95f3b182c866f6a971a134dfd92db86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:18.812261  491960 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:11:18.812309  491960 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:11:18.812341  491960 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:11:18.812377  491960 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 00:11:18.812982  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:11:18.831984  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:11:18.850205  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:11:18.868456  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:11:18.885961  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 00:11:18.903301  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:11:18.920680  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:11:18.937845  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:11:18.955002  491960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:11:18.972638  491960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:11:18.986672  491960 ssh_runner.go:195] Run: openssl version
	I1212 00:11:18.992890  491960 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:11:19.000328  491960 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:11:19.009300  491960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:11:19.013133  491960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:11:19.013207  491960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:11:19.054156  491960 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:11:19.061544  491960 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:11:19.069325  491960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:11:19.072751  491960 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:11:19.072800  491960 kubeadm.go:401] StartCluster: {Name:addons-199484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-199484 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:11:19.072883  491960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:11:19.072946  491960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:11:19.098304  491960 cri.go:89] found id: ""
	I1212 00:11:19.098371  491960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:11:19.106041  491960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:11:19.113502  491960 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:11:19.113594  491960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:11:19.121100  491960 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:11:19.121121  491960 kubeadm.go:158] found existing configuration files:
	
	I1212 00:11:19.121179  491960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:11:19.128779  491960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:11:19.128847  491960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:11:19.136076  491960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:11:19.143613  491960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:11:19.143703  491960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:11:19.151115  491960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:11:19.158650  491960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:11:19.158772  491960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:11:19.165980  491960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:11:19.173669  491960 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:11:19.173851  491960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:11:19.181219  491960 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:11:19.223236  491960 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 00:11:19.223621  491960 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:11:19.246992  491960 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:11:19.247068  491960 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:11:19.247116  491960 kubeadm.go:319] OS: Linux
	I1212 00:11:19.247166  491960 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:11:19.247218  491960 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:11:19.247268  491960 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:11:19.247319  491960 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:11:19.247370  491960 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:11:19.247422  491960 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:11:19.247471  491960 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:11:19.247522  491960 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:11:19.247571  491960 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:11:19.318389  491960 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:11:19.318503  491960 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:11:19.318606  491960 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:11:19.335908  491960 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:11:19.342744  491960 out.go:252]   - Generating certificates and keys ...
	I1212 00:11:19.342873  491960 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:11:19.342960  491960 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:11:19.404492  491960 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:11:20.388610  491960 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:11:21.086529  491960 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:11:22.397741  491960 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:11:22.736326  491960 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:11:22.736501  491960 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-199484 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:11:23.161487  491960 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:11:23.161628  491960 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-199484 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:11:23.666327  491960 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:11:25.200235  491960 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:11:26.033358  491960 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:11:26.033484  491960 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:11:26.656649  491960 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:11:27.002645  491960 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:11:27.375747  491960 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:11:27.436330  491960 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:11:28.355907  491960 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:11:28.356916  491960 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:11:28.360096  491960 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:11:28.365478  491960 out.go:252]   - Booting up control plane ...
	I1212 00:11:28.365581  491960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:11:28.365659  491960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:11:28.365725  491960 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:11:28.380495  491960 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:11:28.380612  491960 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:11:28.387824  491960 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:11:28.388129  491960 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:11:28.388398  491960 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:11:28.512170  491960 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:11:28.512299  491960 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:11:29.512420  491960 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001118148s
	I1212 00:11:29.516152  491960 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 00:11:29.516255  491960 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1212 00:11:29.516351  491960 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 00:11:29.516451  491960 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 00:11:33.484524  491960 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.967434247s
	I1212 00:11:35.038209  491960 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.522033833s
	I1212 00:11:36.019811  491960 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502878073s
	I1212 00:11:36.076596  491960 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 00:11:36.095166  491960 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 00:11:36.113396  491960 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 00:11:36.113822  491960 kubeadm.go:319] [mark-control-plane] Marking the node addons-199484 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 00:11:36.129172  491960 kubeadm.go:319] [bootstrap-token] Using token: 7jijhb.2uwctot57jgsdbqp
	I1212 00:11:36.132221  491960 out.go:252]   - Configuring RBAC rules ...
	I1212 00:11:36.132348  491960 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 00:11:36.141975  491960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 00:11:36.152365  491960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 00:11:36.157591  491960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 00:11:36.162053  491960 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 00:11:36.169603  491960 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 00:11:36.430702  491960 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 00:11:36.867398  491960 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 00:11:37.433176  491960 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 00:11:37.434284  491960 kubeadm.go:319] 
	I1212 00:11:37.434366  491960 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 00:11:37.434376  491960 kubeadm.go:319] 
	I1212 00:11:37.434454  491960 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 00:11:37.434463  491960 kubeadm.go:319] 
	I1212 00:11:37.434494  491960 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 00:11:37.434575  491960 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 00:11:37.434642  491960 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 00:11:37.434648  491960 kubeadm.go:319] 
	I1212 00:11:37.434723  491960 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 00:11:37.434728  491960 kubeadm.go:319] 
	I1212 00:11:37.434775  491960 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 00:11:37.434778  491960 kubeadm.go:319] 
	I1212 00:11:37.434830  491960 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 00:11:37.434905  491960 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 00:11:37.434973  491960 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 00:11:37.434977  491960 kubeadm.go:319] 
	I1212 00:11:37.435061  491960 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 00:11:37.435154  491960 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 00:11:37.435158  491960 kubeadm.go:319] 
	I1212 00:11:37.435244  491960 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7jijhb.2uwctot57jgsdbqp \
	I1212 00:11:37.435346  491960 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ce44b751a4efdc166c14df937c34bee22cb46fcbf4350caae3257de1fd27835c \
	I1212 00:11:37.435367  491960 kubeadm.go:319] 	--control-plane 
	I1212 00:11:37.435370  491960 kubeadm.go:319] 
	I1212 00:11:37.435455  491960 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 00:11:37.435458  491960 kubeadm.go:319] 
	I1212 00:11:37.435540  491960 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7jijhb.2uwctot57jgsdbqp \
	I1212 00:11:37.435644  491960 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ce44b751a4efdc166c14df937c34bee22cb46fcbf4350caae3257de1fd27835c 
	I1212 00:11:37.439213  491960 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1212 00:11:37.439442  491960 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:11:37.439547  491960 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:11:37.439564  491960 cni.go:84] Creating CNI manager for ""
	I1212 00:11:37.439572  491960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:11:37.444438  491960 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 00:11:37.447320  491960 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:11:37.451440  491960 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 00:11:37.451465  491960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 00:11:37.465588  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:11:37.750954  491960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:11:37.751094  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:37.751176  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-199484 minikube.k8s.io/updated_at=2025_12_12T00_11_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=addons-199484 minikube.k8s.io/primary=true
	I1212 00:11:37.768307  491960 ops.go:34] apiserver oom_adj: -16
	I1212 00:11:37.925151  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:38.425749  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:38.925437  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:39.425296  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:39.925511  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:40.425317  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:40.925342  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:41.425921  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:41.925269  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:42.425236  491960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 00:11:42.520295  491960 kubeadm.go:1114] duration metric: took 4.769249486s to wait for elevateKubeSystemPrivileges
	I1212 00:11:42.520322  491960 kubeadm.go:403] duration metric: took 23.447525529s to StartCluster
	I1212 00:11:42.520339  491960 settings.go:142] acquiring lock: {Name:mk274c10b2238dc32d72b68ac2e1ec517b8a72b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:42.520447  491960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:11:42.520840  491960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:11:42.521022  491960 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:11:42.521240  491960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:11:42.521399  491960 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1212 00:11:42.521500  491960 addons.go:70] Setting yakd=true in profile "addons-199484"
	I1212 00:11:42.521518  491960 addons.go:239] Setting addon yakd=true in "addons-199484"
	I1212 00:11:42.521552  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.521611  491960 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:11:42.521664  491960 addons.go:70] Setting inspektor-gadget=true in profile "addons-199484"
	I1212 00:11:42.521689  491960 addons.go:239] Setting addon inspektor-gadget=true in "addons-199484"
	I1212 00:11:42.521731  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.522100  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.522432  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.523006  491960 addons.go:70] Setting metrics-server=true in profile "addons-199484"
	I1212 00:11:42.523032  491960 addons.go:239] Setting addon metrics-server=true in "addons-199484"
	I1212 00:11:42.523057  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.523521  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.523789  491960 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-199484"
	I1212 00:11:42.523913  491960 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-199484"
	I1212 00:11:42.523959  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.524575  491960 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-199484"
	I1212 00:11:42.524627  491960 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-199484"
	I1212 00:11:42.524663  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.525152  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.525705  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.525714  491960 addons.go:70] Setting registry=true in profile "addons-199484"
	I1212 00:11:42.542879  491960 addons.go:239] Setting addon registry=true in "addons-199484"
	I1212 00:11:42.542927  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.543399  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.525724  491960 addons.go:70] Setting registry-creds=true in profile "addons-199484"
	I1212 00:11:42.550782  491960 addons.go:239] Setting addon registry-creds=true in "addons-199484"
	I1212 00:11:42.550831  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.551329  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.525732  491960 addons.go:70] Setting storage-provisioner=true in profile "addons-199484"
	I1212 00:11:42.558846  491960 addons.go:239] Setting addon storage-provisioner=true in "addons-199484"
	I1212 00:11:42.558887  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.525736  491960 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-199484"
	I1212 00:11:42.559124  491960 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-199484"
	I1212 00:11:42.559377  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.525739  491960 addons.go:70] Setting volcano=true in profile "addons-199484"
	I1212 00:11:42.585636  491960 addons.go:239] Setting addon volcano=true in "addons-199484"
	I1212 00:11:42.585683  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.586188  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.589814  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.525746  491960 addons.go:70] Setting volumesnapshots=true in profile "addons-199484"
	I1212 00:11:42.616035  491960 addons.go:239] Setting addon volumesnapshots=true in "addons-199484"
	I1212 00:11:42.616089  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.525771  491960 out.go:179] * Verifying Kubernetes components...
	I1212 00:11:42.525983  491960 addons.go:70] Setting gcp-auth=true in profile "addons-199484"
	I1212 00:11:42.525990  491960 addons.go:70] Setting cloud-spanner=true in profile "addons-199484"
	I1212 00:11:42.525994  491960 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-199484"
	I1212 00:11:42.525997  491960 addons.go:70] Setting default-storageclass=true in profile "addons-199484"
	I1212 00:11:42.638889  491960 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-199484"
	I1212 00:11:42.639504  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.526011  491960 addons.go:70] Setting ingress-dns=true in profile "addons-199484"
	I1212 00:11:42.653599  491960 addons.go:239] Setting addon ingress-dns=true in "addons-199484"
	I1212 00:11:42.653657  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.654128  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.654286  491960 addons.go:239] Setting addon cloud-spanner=true in "addons-199484"
	I1212 00:11:42.654319  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.654910  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.526016  491960 addons.go:70] Setting ingress=true in profile "addons-199484"
	I1212 00:11:42.665216  491960 addons.go:239] Setting addon ingress=true in "addons-199484"
	I1212 00:11:42.665262  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.665735  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.670463  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.687911  491960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:11:42.688176  491960 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1212 00:11:42.696958  491960 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1212 00:11:42.697012  491960 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1212 00:11:42.697029  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1212 00:11:42.697095  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
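The inspect template in the line above extracts the host port that Docker mapped to the container's SSH port 22; the same lookup can be run by hand against the addons-199484 container (command reconstructed from the log, assuming a local docker CLI):

        docker container inspect addons-199484 \
          -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'

Every sshutil.go "new ssh client" line that follows connects to 127.0.0.1 on the port this template returns (33168 in this run).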
	I1212 00:11:42.699321  491960 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-199484"
	I1212 00:11:42.699373  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.699850  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.735108  491960 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1212 00:11:42.735239  491960 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1212 00:11:42.738214  491960 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 00:11:42.738244  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1212 00:11:42.738314  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.738574  491960 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 00:11:42.738606  491960 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 00:11:42.738655  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.745360  491960 mustload.go:66] Loading cluster: addons-199484
	I1212 00:11:42.745583  491960 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:11:42.745927  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.749732  491960 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1212 00:11:42.752758  491960 out.go:179]   - Using image docker.io/registry:3.0.0
	I1212 00:11:42.755921  491960 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 00:11:42.755977  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1212 00:11:42.756043  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.789120  491960 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1212 00:11:42.789146  491960 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1212 00:11:42.789321  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.801702  491960 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1212 00:11:42.804586  491960 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 00:11:42.804611  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 00:11:42.804688  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.810109  491960 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-199484"
	I1212 00:11:42.810150  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.815097  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.827309  491960 addons.go:239] Setting addon default-storageclass=true in "addons-199484"
	I1212 00:11:42.827372  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.828789  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:42.833801  491960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 00:11:42.837130  491960 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1212 00:11:42.840279  491960 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 00:11:42.845705  491960 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 00:11:42.845966  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:42.847653  491960 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1212 00:11:42.847765  491960 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1212 00:11:42.849035  491960 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1212 00:11:42.868761  491960 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1212 00:11:42.876555  491960 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1212 00:11:42.878831  491960 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 00:11:42.878889  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1212 00:11:42.879028  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.881381  491960 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:11:42.881402  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:11:42.881464  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.893893  491960 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 00:11:42.893919  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1212 00:11:42.893998  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.914942  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:42.922480  491960 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1212 00:11:42.922503  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 00:11:42.922569  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.928295  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:42.928869  491960 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 00:11:42.928883  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1212 00:11:42.928973  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.956803  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 00:11:42.959786  491960 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 00:11:42.959815  491960 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 00:11:42.959893  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:42.978310  491960 out.go:179]   - Using image docker.io/busybox:stable
	I1212 00:11:43.037487  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.042842  491960 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:11:43.042919  491960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:11:43.042997  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:43.057579  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.058589  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.060926  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.065888  491960 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 00:11:43.066042  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 00:11:43.069056  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 00:11:43.069209  491960 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 00:11:43.069245  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 00:11:43.069313  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:43.099300  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 00:11:43.105696  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.117390  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 00:11:43.119358  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.131480  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 00:11:43.135235  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 00:11:43.137110  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.142912  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.146771  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 00:11:43.149828  491960 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 00:11:43.154830  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.155357  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.156524  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 00:11:43.156540  491960 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 00:11:43.156599  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:43.188583  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.195686  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.206940  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:43.332670  491960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:11:43.637987  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1212 00:11:43.761965  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 00:11:43.767424  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:11:43.822889  491960 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1212 00:11:43.822913  491960 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1212 00:11:43.864020  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 00:11:43.866134  491960 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 00:11:43.866154  491960 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 00:11:43.868710  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 00:11:43.881640  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 00:11:43.894134  491960 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 00:11:43.894205  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 00:11:43.901466  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 00:11:43.910813  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:11:43.925850  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 00:11:43.925923  491960 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 00:11:43.943147  491960 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1212 00:11:43.943172  491960 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1212 00:11:43.974426  491960 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 00:11:43.974461  491960 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 00:11:44.004307  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 00:11:44.040983  491960 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 00:11:44.041009  491960 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 00:11:44.046163  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 00:11:44.054578  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 00:11:44.054603  491960 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 00:11:44.078608  491960 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 00:11:44.078633  491960 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 00:11:44.145014  491960 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 00:11:44.145037  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 00:11:44.155324  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 00:11:44.155357  491960 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 00:11:44.158060  491960 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1212 00:11:44.158088  491960 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1212 00:11:44.286948  491960 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 00:11:44.286981  491960 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 00:11:44.291467  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 00:11:44.291498  491960 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 00:11:44.293147  491960 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 00:11:44.293164  491960 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 00:11:44.379664  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 00:11:44.379690  491960 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 00:11:44.449672  491960 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1212 00:11:44.449704  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1212 00:11:44.471382  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 00:11:44.499970  491960 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 00:11:44.499992  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 00:11:44.554408  491960 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 00:11:44.554441  491960 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 00:11:44.604264  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 00:11:44.659887  491960 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.82604916s)
	I1212 00:11:44.659927  491960 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
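The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the container gateway. A minimal sketch of the resulting Corefile fragment, reconstructed from the sed expressions above rather than read back from the cluster:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

The fallthrough keyword keeps normal upstream resolution working for every other name.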
	I1212 00:11:44.659990  491960 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.327292341s)
	I1212 00:11:44.660745  491960 node_ready.go:35] waiting up to 6m0s for node "addons-199484" to be "Ready" ...
	I1212 00:11:44.663857  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1212 00:11:44.731784  491960 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 00:11:44.731811  491960 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 00:11:44.822285  491960 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:11:44.822308  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 00:11:44.839776  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:11:44.944551  491960 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 00:11:44.944577  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 00:11:45.166583  491960 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-199484" context rescaled to 1 replicas
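The rescale logged above shrinks the coredns deployment to a single replica for the single-node profile; roughly the equivalent by hand, assuming kubectl is pointed at the addons-199484 context:

        kubectl --context addons-199484 -n kube-system \
          scale deployment coredns --replicas=1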
	I1212 00:11:45.287199  491960 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 00:11:45.287227  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 00:11:45.540098  491960 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 00:11:45.540123  491960 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 00:11:45.761306  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1212 00:11:46.679345  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
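node_ready.go polls the node's Ready condition until it flips to True, retrying on every "Ready":"False" warning like the one above. Roughly the same check by hand, assuming the addons-199484 context:

        kubectl get node addons-199484 \
          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

This prints False until kubelet reports the node Ready, at which point the wait loop ends.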
	I1212 00:11:47.880217  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.242188202s)
	I1212 00:11:47.880373  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.11838341s)
	I1212 00:11:47.880415  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.112968295s)
	I1212 00:11:47.880477  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.016432642s)
	I1212 00:11:48.957162  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.088375886s)
	I1212 00:11:48.957383  491960 addons.go:495] Verifying addon ingress=true in "addons-199484"
	I1212 00:11:48.957242  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.075531823s)
	I1212 00:11:48.957251  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.055702651s)
	I1212 00:11:48.957261  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.046382264s)
	I1212 00:11:48.957268  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.952937246s)
	I1212 00:11:48.957277  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.911091312s)
	I1212 00:11:48.957286  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.485881397s)
	I1212 00:11:48.957703  491960 addons.go:495] Verifying addon registry=true in "addons-199484"
	I1212 00:11:48.957308  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.353006501s)
	I1212 00:11:48.958219  491960 addons.go:495] Verifying addon metrics-server=true in "addons-199484"
	I1212 00:11:48.957319  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.293435573s)
	I1212 00:11:48.957339  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.117529464s)
	W1212 00:11:48.958296  491960 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 00:11:48.958315  491960 retry.go:31] will retry after 371.995665ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
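The failure above is a CRD establishment race: the VolumeSnapshotClass object is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, before the API server has registered the new kind, hence "no matches for kind VolumeSnapshotClass". minikube's retry (and the forced re-apply at 00:11:49 below) resolves it. The race can also be avoided by waiting for the CRDs explicitly; a minimal sketch, assuming the same manifest file names as in the log:

        kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
        kubectl wait --for condition=established --timeout=60s \
          crd/volumesnapshotclasses.snapshot.storage.k8s.io
        kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml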
	I1212 00:11:48.961987  491960 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-199484 service yakd-dashboard -n yakd-dashboard
	
	I1212 00:11:48.962142  491960 out.go:179] * Verifying registry addon...
	I1212 00:11:48.962187  491960 out.go:179] * Verifying ingress addon...
	I1212 00:11:48.966986  491960 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 00:11:48.968012  491960 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W1212 00:11:48.977809  491960 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
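This conflict is Kubernetes optimistic concurrency at work: two writers raced to update the local-path StorageClass, and the loser's stale resourceVersion was rejected. A strategic-merge patch sidesteps the version check; a minimal sketch of what the addon is attempting, assuming the standard default-class annotation:

        kubectl patch storageclass local-path -p \
          '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'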
	I1212 00:11:48.988384  491960 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 00:11:48.988416  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:48.988578  491960 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 00:11:48.988603  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
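Each kapi.go:75/96 pair above and below is a label-selector poll against the pods of one addon. The same wait can be reproduced with kubectl, shown here for the registry pods (selector taken from the log, timeout assumed):

        kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
        kubectl -n kube-system wait --for=condition=Ready pod \
          -l kubernetes.io/minikube-addons=registry --timeout=6m

The long run of "current state: Pending" lines that follows is this poll repeating until each pod reports Ready.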
	W1212 00:11:49.175793  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:11:49.223412  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.462062388s)
	I1212 00:11:49.223448  491960 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-199484"
	I1212 00:11:49.228466  491960 out.go:179] * Verifying csi-hostpath-driver addon...
	I1212 00:11:49.232229  491960 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 00:11:49.239465  491960 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 00:11:49.239493  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:49.331173  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 00:11:49.472603  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:49.473419  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:49.735811  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:49.972318  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:49.973174  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:50.236271  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:50.471417  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:50.471577  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:50.526758  491960 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 00:11:50.526861  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:50.543016  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:50.660174  491960 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 00:11:50.674649  491960 addons.go:239] Setting addon gcp-auth=true in "addons-199484"
	I1212 00:11:50.674724  491960 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:11:50.675192  491960 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:11:50.692885  491960 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 00:11:50.692934  491960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:11:50.710451  491960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:11:50.735962  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:50.970545  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:50.975557  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:51.235584  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:51.471140  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:51.471241  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1212 00:11:51.669481  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:11:51.739567  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:51.972326  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:51.973190  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:52.020579  491960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.689354521s)
	I1212 00:11:52.020600  491960 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.327690125s)
	I1212 00:11:52.023538  491960 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 00:11:52.026389  491960 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1212 00:11:52.029183  491960 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 00:11:52.029212  491960 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 00:11:52.042802  491960 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 00:11:52.042866  491960 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 00:11:52.056509  491960 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 00:11:52.056534  491960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1212 00:11:52.070760  491960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 00:11:52.235851  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:52.477568  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:52.478291  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:52.570717  491960 addons.go:495] Verifying addon gcp-auth=true in "addons-199484"
	I1212 00:11:52.573900  491960 out.go:179] * Verifying gcp-auth addon...
	I1212 00:11:52.578411  491960 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 00:11:52.588791  491960 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 00:11:52.588833  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:52.737454  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:52.970523  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:52.971693  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:53.081919  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:53.235463  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:53.470297  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:53.470793  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:53.581826  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:53.735619  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:53.971637  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:53.971794  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:54.081794  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:11:54.163476  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:11:54.235239  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:54.471382  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:54.472739  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:54.582032  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:54.735692  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:54.970845  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:54.971406  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:55.081330  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:55.236029  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:55.470065  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:55.471062  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:55.581821  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:55.735548  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:55.971142  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:55.971258  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:56.082303  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:11:56.164202  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:11:56.235832  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:56.471132  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:56.471395  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:56.582177  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:56.735638  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:56.970959  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:56.971015  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:57.082140  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:57.235558  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:57.471061  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:57.472139  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:57.582368  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:57.735904  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:57.970122  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:57.971104  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:58.082370  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:11:58.164474  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:11:58.235529  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:58.471602  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:58.471935  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:58.582069  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:58.735432  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:58.970980  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:58.971314  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:59.082211  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:59.235660  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:59.471599  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:59.472047  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:11:59.581733  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:11:59.735342  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:11:59.971614  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:11:59.971869  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:00.098185  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:00.164880  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:00.243430  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:00.470445  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:00.472292  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:00.582321  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:00.735627  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:00.971351  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:00.971736  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:01.081664  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:01.235148  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:01.471182  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:01.471484  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:01.581153  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:01.735660  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:01.971404  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:01.972144  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:02.081834  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:02.235740  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:02.471413  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:02.471457  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:02.581258  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:02.664500  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:02.735371  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:02.971013  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:02.971528  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:03.081488  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:03.235201  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:03.470675  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:03.471379  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:03.581246  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:03.735892  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:03.972229  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:03.972430  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:04.082388  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:04.235818  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:04.471087  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:04.471294  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:04.582083  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:04.736000  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:04.970297  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:04.971903  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:05.081681  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:05.164481  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:05.235514  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:05.471225  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:05.471837  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:05.581589  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:05.735431  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:05.971454  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:05.971759  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:06.081931  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:06.235605  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:06.470304  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:06.471591  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:06.581566  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:06.735416  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:06.970397  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:06.971414  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:07.081243  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:07.235708  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:07.471220  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:07.471370  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:07.582406  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:07.664069  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:07.736221  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:07.970353  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:07.971449  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:08.081541  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:08.236222  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:08.470159  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:08.471620  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:08.581408  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:08.736220  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:08.970483  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:08.970986  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:09.081905  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:09.235359  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:09.470587  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:09.471664  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:09.581458  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:09.664350  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:09.735910  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:09.972110  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:09.972618  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:10.081670  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:10.235003  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:10.470111  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:10.471089  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:10.581811  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:10.735811  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:10.976440  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:10.976643  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:11.081384  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:11.242855  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:11.469904  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:11.470966  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:11.582065  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:11.735716  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:11.972243  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:11.972527  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:12.082255  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:12.164242  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:12.235553  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:12.470540  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:12.471747  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:12.582068  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:12.735552  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:12.971666  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:12.972037  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:13.082032  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:13.235840  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:13.471058  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:13.471188  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:13.582039  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:13.735431  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:13.971106  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:13.971276  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:14.081259  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:14.235867  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:14.469700  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:14.471083  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:14.581815  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:14.663558  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:14.735364  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:14.970557  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:14.971066  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:15.082284  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:15.236221  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:15.471368  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:15.471504  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:15.581514  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:15.735479  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:15.971921  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:15.972042  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:16.081908  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:16.235723  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:16.470936  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:16.471158  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:16.582106  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:16.663636  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:16.735524  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:16.971531  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:16.971668  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:17.081459  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:17.236075  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:17.470035  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:17.471234  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:17.582135  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:17.735701  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:17.970853  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:17.971030  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:18.082626  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:18.236059  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:18.470222  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:18.471071  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:18.582025  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:18.735335  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:18.971120  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:18.971872  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:19.081896  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:19.163449  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:19.235309  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:19.470445  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:19.471699  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:19.581610  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:19.735254  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:19.970402  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:19.971004  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:20.082069  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:20.235914  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:20.471012  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:20.471123  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:20.581555  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:20.735883  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:20.970762  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:20.970914  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:21.081713  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 00:12:21.163810  491960 node_ready.go:57] node "addons-199484" has "Ready":"False" status (will retry)
	I1212 00:12:21.235954  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:21.469750  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:21.470820  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:21.581682  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:21.735599  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:21.971855  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:21.971874  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:22.081491  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:22.235969  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:22.471551  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:22.471783  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:22.581715  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:22.735056  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:22.971719  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:22.971886  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:23.081820  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:23.248130  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:23.514483  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:23.514921  491960 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 00:12:23.514941  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
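
The kapi.go:96 lines that dominate this log are minikube re-listing pods by label selector until they leave Pending; once pods matching the selector exist it also logs the "Found N Pods" line seen just above. Purely as an illustration of that pattern, here is a minimal client-go polling loop; waitForPodsByLabel, the 500 ms cadence, and the kubeconfig path are assumptions for this sketch, not minikube's actual kapi.go code.

// pollpods_sketch.go - hypothetical illustration of the kapi.go:96 pattern.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodsByLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		notRunning := 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				notRunning++
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if len(pods.Items) > 0 && notRunning == 0 {
			return nil // every matching pod has reached Running
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // roughly the cadence visible in the timestamps above
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	if err := waitForPodsByLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
}
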
	I1212 00:12:23.667575  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:23.694953  491960 node_ready.go:49] node "addons-199484" is "Ready"
	I1212 00:12:23.694987  491960 node_ready.go:38] duration metric: took 39.03421277s for node "addons-199484" to be "Ready" ...
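
The node_ready.go lines interleaved above (the W... "will retry" warnings ending in the "is Ready" line after 39s) are a second poll loop, this one on the node object's Ready condition. A sketch of that check under the same assumptions as the previous snippet; waitNodeReady and the 2.5 s retry interval are illustrative, inferred from the warning timestamps, not minikube's node_ready.go itself.

// nodeready_sketch.go - hypothetical helper, compiles as its own file.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady re-fetches the node until its NodeReady condition is True.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil // corresponds to the node_ready.go:49 "is Ready" line
			}
		}
		fmt.Printf("node %q has Ready=False status (will retry)\n", name)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2500 * time.Millisecond): // warnings above arrive ~2.5s apart
		}
	}
}
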
	I1212 00:12:23.695034  491960 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:12:23.695120  491960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:12:23.740849  491960 api_server.go:72] duration metric: took 41.219799312s to wait for apiserver process to appear ...
	I1212 00:12:23.740871  491960 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:12:23.740889  491960 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 00:12:23.758606  491960 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 00:12:23.758625  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:23.761871  491960 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 00:12:23.763204  491960 api_server.go:141] control plane version: v1.34.2
	I1212 00:12:23.763234  491960 api_server.go:131] duration metric: took 22.356204ms to wait for apiserver health ...
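
The api_server.go lines above probe the apiserver's /healthz endpoint at the address shown in the log until it answers 200 with body "ok". A standalone sketch of that probe; InsecureSkipVerify is an assumption to keep it short (a real client should trust the cluster CA instead), and the URL is copied from the log.

// healthz_sketch.go - illustrative probe, not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with body "ok", as the log shows.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
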
	I1212 00:12:23.763243  491960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:12:23.770261  491960 system_pods.go:59] 19 kube-system pods found
	I1212 00:12:23.770304  491960 system_pods.go:61] "coredns-66bc5c9577-jx5nq" [1441bde5-8d17-4f82-b545-2205e14d0ec2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:12:23.770311  491960 system_pods.go:61] "csi-hostpath-attacher-0" [c0602985-5c62-4851-a954-f6726d8e981d] Pending
	I1212 00:12:23.770316  491960 system_pods.go:61] "csi-hostpath-resizer-0" [e8a9d55a-83af-40e8-8cb5-e26e1b1a17c9] Pending
	I1212 00:12:23.770357  491960 system_pods.go:61] "csi-hostpathplugin-pwldg" [599bce84-aff5-4988-84d6-dc8737e43905] Pending
	I1212 00:12:23.770368  491960 system_pods.go:61] "etcd-addons-199484" [6faaa3a3-4bb7-4e75-b91a-4982ecb6c267] Running
	I1212 00:12:23.770372  491960 system_pods.go:61] "kindnet-5nsn6" [be8e608b-8151-43d8-8758-427ba1e34b96] Running
	I1212 00:12:23.770377  491960 system_pods.go:61] "kube-apiserver-addons-199484" [e78bc7f1-6c37-4bf1-82c1-9393394e6746] Running
	I1212 00:12:23.770380  491960 system_pods.go:61] "kube-controller-manager-addons-199484" [2c62e9d6-a02c-4f11-b22c-0b2e27111fe2] Running
	I1212 00:12:23.770385  491960 system_pods.go:61] "kube-ingress-dns-minikube" [fb6b5ab3-3661-4649-b69f-3fd26a0642c0] Pending
	I1212 00:12:23.770388  491960 system_pods.go:61] "kube-proxy-67nfx" [df3b1547-ea52-41f3-bd8d-c7c6cebfdb3e] Running
	I1212 00:12:23.770398  491960 system_pods.go:61] "kube-scheduler-addons-199484" [0f189033-4b7e-4e83-a065-43e54bac390a] Running
	I1212 00:12:23.770404  491960 system_pods.go:61] "metrics-server-85b7d694d7-mp4fx" [9036b134-5412-45fa-b9e5-a5e100672fb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 00:12:23.770424  491960 system_pods.go:61] "nvidia-device-plugin-daemonset-4jhc7" [cb7157aa-9a23-4982-ad13-4bef3501f23d] Pending
	I1212 00:12:23.770439  491960 system_pods.go:61] "registry-6b586f9694-d69pq" [c6dc399c-e268-46e8-a3ea-8929470b439b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 00:12:23.770458  491960 system_pods.go:61] "registry-creds-764b6fb674-mf9j5" [7fa7165f-4d24-4504-b416-3d9ed68b1a7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 00:12:23.770469  491960 system_pods.go:61] "registry-proxy-rj8pk" [c6c691b6-a31f-4583-bd46-b7cbd62968ed] Pending
	I1212 00:12:23.770477  491960 system_pods.go:61] "snapshot-controller-7d9fbc56b8-lxshs" [d74f74dd-2db5-463e-9f47-e5876c71de58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:23.770489  491960 system_pods.go:61] "snapshot-controller-7d9fbc56b8-p7d72" [b7a508ac-02c6-4950-9190-2b77aa44f343] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:23.770508  491960 system_pods.go:61] "storage-provisioner" [ed62f61e-3d7c-4056-a513-06dc0821d283] Pending
	I1212 00:12:23.770523  491960 system_pods.go:74] duration metric: took 7.258555ms to wait for pod list to return data ...
	I1212 00:12:23.770533  491960 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:12:23.788034  491960 default_sa.go:45] found service account: "default"
	I1212 00:12:23.788079  491960 default_sa.go:55] duration metric: took 17.535284ms for default service account to be created ...
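
The default_sa.go step waits for the "default" ServiceAccount, which the controller manager creates asynchronously and which pods in the default namespace need before they can be admitted. A hypothetical equivalent of that existence check (defaultServiceAccountExists is an invented name; the client setup is as in the earlier sketches):

// defaultsa_sketch.go - hypothetical check, not minikube's default_sa.go.
package main

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func defaultServiceAccountExists(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil // not created yet - the caller keeps polling
	}
	return err == nil, err
}
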
	I1212 00:12:23.788091  491960 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:12:23.800452  491960 system_pods.go:86] 19 kube-system pods found
	I1212 00:12:23.800489  491960 system_pods.go:89] "coredns-66bc5c9577-jx5nq" [1441bde5-8d17-4f82-b545-2205e14d0ec2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:12:23.800496  491960 system_pods.go:89] "csi-hostpath-attacher-0" [c0602985-5c62-4851-a954-f6726d8e981d] Pending
	I1212 00:12:23.800502  491960 system_pods.go:89] "csi-hostpath-resizer-0" [e8a9d55a-83af-40e8-8cb5-e26e1b1a17c9] Pending
	I1212 00:12:23.800506  491960 system_pods.go:89] "csi-hostpathplugin-pwldg" [599bce84-aff5-4988-84d6-dc8737e43905] Pending
	I1212 00:12:23.800531  491960 system_pods.go:89] "etcd-addons-199484" [6faaa3a3-4bb7-4e75-b91a-4982ecb6c267] Running
	I1212 00:12:23.800544  491960 system_pods.go:89] "kindnet-5nsn6" [be8e608b-8151-43d8-8758-427ba1e34b96] Running
	I1212 00:12:23.800549  491960 system_pods.go:89] "kube-apiserver-addons-199484" [e78bc7f1-6c37-4bf1-82c1-9393394e6746] Running
	I1212 00:12:23.800554  491960 system_pods.go:89] "kube-controller-manager-addons-199484" [2c62e9d6-a02c-4f11-b22c-0b2e27111fe2] Running
	I1212 00:12:23.800565  491960 system_pods.go:89] "kube-ingress-dns-minikube" [fb6b5ab3-3661-4649-b69f-3fd26a0642c0] Pending
	I1212 00:12:23.800572  491960 system_pods.go:89] "kube-proxy-67nfx" [df3b1547-ea52-41f3-bd8d-c7c6cebfdb3e] Running
	I1212 00:12:23.800576  491960 system_pods.go:89] "kube-scheduler-addons-199484" [0f189033-4b7e-4e83-a065-43e54bac390a] Running
	I1212 00:12:23.800582  491960 system_pods.go:89] "metrics-server-85b7d694d7-mp4fx" [9036b134-5412-45fa-b9e5-a5e100672fb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 00:12:23.800590  491960 system_pods.go:89] "nvidia-device-plugin-daemonset-4jhc7" [cb7157aa-9a23-4982-ad13-4bef3501f23d] Pending
	I1212 00:12:23.800616  491960 system_pods.go:89] "registry-6b586f9694-d69pq" [c6dc399c-e268-46e8-a3ea-8929470b439b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 00:12:23.800638  491960 system_pods.go:89] "registry-creds-764b6fb674-mf9j5" [7fa7165f-4d24-4504-b416-3d9ed68b1a7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 00:12:23.800643  491960 system_pods.go:89] "registry-proxy-rj8pk" [c6c691b6-a31f-4583-bd46-b7cbd62968ed] Pending
	I1212 00:12:23.800650  491960 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lxshs" [d74f74dd-2db5-463e-9f47-e5876c71de58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:23.800663  491960 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p7d72" [b7a508ac-02c6-4950-9190-2b77aa44f343] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:23.800667  491960 system_pods.go:89] "storage-provisioner" [ed62f61e-3d7c-4056-a513-06dc0821d283] Pending
	I1212 00:12:23.800689  491960 retry.go:31] will retry after 198.161172ms: missing components: kube-dns
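
The retry.go:31 line above shows a jittered backoff wrapped around the "are the core k8s-apps running yet?" check; here kube-dns is reported missing because no CoreDNS pod is Running yet. A rough equivalent using apimachinery's wait helpers (fixed interval rather than minikube's jittered one; the k8s-app=kube-dns selector is the standard CoreDNS label):

// retrydns_sketch.go - illustrative backoff loop, not minikube's retry.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 300*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kube-dns",
			})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			fmt.Println("missing components: kube-dns; will retry")
			return false, nil
		})
	if err != nil {
		panic(err)
	}
}
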
	I1212 00:12:23.980385  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:23.980635  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:24.041468  491960 system_pods.go:86] 19 kube-system pods found
	I1212 00:12:24.041508  491960 system_pods.go:89] "coredns-66bc5c9577-jx5nq" [1441bde5-8d17-4f82-b545-2205e14d0ec2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:12:24.041515  491960 system_pods.go:89] "csi-hostpath-attacher-0" [c0602985-5c62-4851-a954-f6726d8e981d] Pending
	I1212 00:12:24.041557  491960 system_pods.go:89] "csi-hostpath-resizer-0" [e8a9d55a-83af-40e8-8cb5-e26e1b1a17c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 00:12:24.041571  491960 system_pods.go:89] "csi-hostpathplugin-pwldg" [599bce84-aff5-4988-84d6-dc8737e43905] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 00:12:24.041577  491960 system_pods.go:89] "etcd-addons-199484" [6faaa3a3-4bb7-4e75-b91a-4982ecb6c267] Running
	I1212 00:12:24.041590  491960 system_pods.go:89] "kindnet-5nsn6" [be8e608b-8151-43d8-8758-427ba1e34b96] Running
	I1212 00:12:24.041594  491960 system_pods.go:89] "kube-apiserver-addons-199484" [e78bc7f1-6c37-4bf1-82c1-9393394e6746] Running
	I1212 00:12:24.041599  491960 system_pods.go:89] "kube-controller-manager-addons-199484" [2c62e9d6-a02c-4f11-b22c-0b2e27111fe2] Running
	I1212 00:12:24.041623  491960 system_pods.go:89] "kube-ingress-dns-minikube" [fb6b5ab3-3661-4649-b69f-3fd26a0642c0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 00:12:24.041635  491960 system_pods.go:89] "kube-proxy-67nfx" [df3b1547-ea52-41f3-bd8d-c7c6cebfdb3e] Running
	I1212 00:12:24.041641  491960 system_pods.go:89] "kube-scheduler-addons-199484" [0f189033-4b7e-4e83-a065-43e54bac390a] Running
	I1212 00:12:24.041649  491960 system_pods.go:89] "metrics-server-85b7d694d7-mp4fx" [9036b134-5412-45fa-b9e5-a5e100672fb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 00:12:24.041661  491960 system_pods.go:89] "nvidia-device-plugin-daemonset-4jhc7" [cb7157aa-9a23-4982-ad13-4bef3501f23d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 00:12:24.041669  491960 system_pods.go:89] "registry-6b586f9694-d69pq" [c6dc399c-e268-46e8-a3ea-8929470b439b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 00:12:24.041680  491960 system_pods.go:89] "registry-creds-764b6fb674-mf9j5" [7fa7165f-4d24-4504-b416-3d9ed68b1a7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 00:12:24.041698  491960 system_pods.go:89] "registry-proxy-rj8pk" [c6c691b6-a31f-4583-bd46-b7cbd62968ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 00:12:24.041712  491960 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lxshs" [d74f74dd-2db5-463e-9f47-e5876c71de58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:24.041719  491960 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p7d72" [b7a508ac-02c6-4950-9190-2b77aa44f343] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:24.041743  491960 system_pods.go:89] "storage-provisioner" [ed62f61e-3d7c-4056-a513-06dc0821d283] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:12:24.041758  491960 retry.go:31] will retry after 364.11001ms: missing components: kube-dns
	I1212 00:12:24.107145  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:24.253926  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:24.411110  491960 system_pods.go:86] 19 kube-system pods found
	I1212 00:12:24.411146  491960 system_pods.go:89] "coredns-66bc5c9577-jx5nq" [1441bde5-8d17-4f82-b545-2205e14d0ec2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:12:24.411157  491960 system_pods.go:89] "csi-hostpath-attacher-0" [c0602985-5c62-4851-a954-f6726d8e981d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 00:12:24.411204  491960 system_pods.go:89] "csi-hostpath-resizer-0" [e8a9d55a-83af-40e8-8cb5-e26e1b1a17c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 00:12:24.411212  491960 system_pods.go:89] "csi-hostpathplugin-pwldg" [599bce84-aff5-4988-84d6-dc8737e43905] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 00:12:24.411223  491960 system_pods.go:89] "etcd-addons-199484" [6faaa3a3-4bb7-4e75-b91a-4982ecb6c267] Running
	I1212 00:12:24.411229  491960 system_pods.go:89] "kindnet-5nsn6" [be8e608b-8151-43d8-8758-427ba1e34b96] Running
	I1212 00:12:24.411234  491960 system_pods.go:89] "kube-apiserver-addons-199484" [e78bc7f1-6c37-4bf1-82c1-9393394e6746] Running
	I1212 00:12:24.411238  491960 system_pods.go:89] "kube-controller-manager-addons-199484" [2c62e9d6-a02c-4f11-b22c-0b2e27111fe2] Running
	I1212 00:12:24.411260  491960 system_pods.go:89] "kube-ingress-dns-minikube" [fb6b5ab3-3661-4649-b69f-3fd26a0642c0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 00:12:24.411276  491960 system_pods.go:89] "kube-proxy-67nfx" [df3b1547-ea52-41f3-bd8d-c7c6cebfdb3e] Running
	I1212 00:12:24.411281  491960 system_pods.go:89] "kube-scheduler-addons-199484" [0f189033-4b7e-4e83-a065-43e54bac390a] Running
	I1212 00:12:24.411287  491960 system_pods.go:89] "metrics-server-85b7d694d7-mp4fx" [9036b134-5412-45fa-b9e5-a5e100672fb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 00:12:24.411314  491960 system_pods.go:89] "nvidia-device-plugin-daemonset-4jhc7" [cb7157aa-9a23-4982-ad13-4bef3501f23d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 00:12:24.411322  491960 system_pods.go:89] "registry-6b586f9694-d69pq" [c6dc399c-e268-46e8-a3ea-8929470b439b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 00:12:24.411328  491960 system_pods.go:89] "registry-creds-764b6fb674-mf9j5" [7fa7165f-4d24-4504-b416-3d9ed68b1a7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 00:12:24.411343  491960 system_pods.go:89] "registry-proxy-rj8pk" [c6c691b6-a31f-4583-bd46-b7cbd62968ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 00:12:24.411356  491960 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lxshs" [d74f74dd-2db5-463e-9f47-e5876c71de58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:24.411375  491960 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p7d72" [b7a508ac-02c6-4950-9190-2b77aa44f343] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 00:12:24.411391  491960 system_pods.go:89] "storage-provisioner" [ed62f61e-3d7c-4056-a513-06dc0821d283] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:12:24.411411  491960 system_pods.go:126] duration metric: took 623.290756ms to wait for k8s-apps to be running ...
	I1212 00:12:24.411427  491960 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:12:24.411505  491960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:12:24.427503  491960 system_svc.go:56] duration metric: took 16.067385ms WaitForService to wait for kubelet
	I1212 00:12:24.427530  491960 kubeadm.go:587] duration metric: took 41.90648597s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
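
The system_svc.go step above runs `sudo systemctl is-active --quiet service kubelet` on the node over SSH and treats a zero exit status as "active". The same idea run locally, with just the kubelet unit name (the extra "service" argument from the log is dropped here for simplicity):

// kubeletsvc_sketch.go - standalone illustration of the system_svc.go check.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err) // non-zero exit or exec failure
		return
	}
	fmt.Println("kubelet is active")
}
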
	I1212 00:12:24.427573  491960 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:12:24.430566  491960 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 00:12:24.430597  491960 node_conditions.go:123] node cpu capacity is 2
	I1212 00:12:24.430613  491960 node_conditions.go:105] duration metric: took 3.026455ms to run NodePressure ...
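
The node_conditions.go lines above read the node's capacity (ephemeral storage, CPU) and verify that no pressure condition is set. A hypothetical version of that verification (verifyNodePressure is an invented name; client setup as in the earlier sketches):

// nodepressure_sketch.go - illustrative, not minikube's node_conditions.go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func verifyNodePressure(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String())
	fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			if c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s is under %s", name, c.Type)
			}
		}
	}
	return nil
}
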
	I1212 00:12:24.430641  491960 start.go:242] waiting for startup goroutines ...
	I1212 00:12:24.471708  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:24.473466  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:24.582030  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:24.736869  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:24.972828  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:24.972998  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:25.082615  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:25.235888  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:25.472532  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:25.473638  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:25.581677  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:25.736174  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:25.972360  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:25.972552  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:26.081347  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:26.240500  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:26.473026  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:26.473163  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:26.582122  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:26.735654  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:26.972692  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:26.973178  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:27.083076  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:27.236867  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:27.473790  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:27.473946  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:27.582836  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:27.736364  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:27.971311  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:27.971573  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:28.082207  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:28.237235  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:28.472198  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:28.472544  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:28.581533  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:28.736271  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:28.984335  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:28.984710  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:29.093918  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:29.236585  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:29.471169  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:29.471344  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:29.581848  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:29.736377  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:29.970078  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:29.972302  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:30.082979  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:30.236485  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:30.473315  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:30.473476  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:30.581414  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:30.735666  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:30.970849  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:30.971001  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:31.082836  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:31.236796  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:31.475869  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:31.476357  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:31.582007  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:31.736582  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:31.972796  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:31.973197  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:32.082636  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:32.236223  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:32.473367  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:32.473940  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:32.582834  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:32.736627  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:32.972425  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:32.974149  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:33.082579  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:33.236219  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:33.470521  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:33.471543  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:33.581166  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:33.736655  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:33.973895  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:33.974060  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:34.082606  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:34.237881  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:34.476128  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:34.477905  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:34.587864  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:34.737541  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:34.974141  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:34.974272  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:35.084027  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:35.237611  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:35.473442  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:35.474149  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:35.582425  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:35.736287  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:35.973452  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:35.973845  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:36.082518  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:36.237619  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:36.476018  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:36.476167  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:36.582898  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:36.737152  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:36.972249  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:36.973869  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:37.082234  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:37.235880  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:37.472539  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:37.472928  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:37.582877  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:37.736140  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:37.972080  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:37.973572  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:38.082083  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:38.240436  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:38.472013  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:38.472165  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:38.582514  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:38.735888  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:38.971566  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:38.973107  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:39.082485  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:39.236705  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:39.473951  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:39.474484  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:39.581174  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:39.735264  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:39.972044  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:39.972534  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:40.082412  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:40.236095  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:40.472723  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:40.472918  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:40.582407  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:40.737255  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:40.973001  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:40.973455  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:41.081790  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:41.242144  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:41.471948  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:41.472436  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:41.582435  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:41.736054  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:41.970533  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:41.971331  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:42.082438  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:42.237181  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:42.471555  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:42.473486  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:42.581672  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:42.736631  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:42.979285  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:42.979640  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:43.082073  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:43.235672  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:43.474053  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:43.474584  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:43.581862  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:43.736074  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:43.972911  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:43.973550  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:44.083075  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:44.237393  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:44.470972  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:44.471408  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:44.581409  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:44.768852  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:44.974394  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:44.974817  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:45.086425  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:45.236600  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:45.473954  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:45.474397  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:45.588221  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:45.736202  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:45.973310  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:45.973761  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:46.084227  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:46.237602  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:46.477338  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:46.477768  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:46.582264  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:46.736633  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:46.972858  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:46.973266  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:47.082825  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:47.236499  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:47.473319  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:47.473759  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:47.582826  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:47.736717  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:47.971998  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:47.972326  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:48.082745  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:48.237876  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:48.471676  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:48.472303  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:48.582569  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:48.736226  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:48.972526  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:48.972702  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:49.081844  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:49.236404  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:49.474513  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:49.474750  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:49.582070  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:49.737367  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:49.973860  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:49.973988  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:50.081824  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:50.235929  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:50.479970  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:50.480111  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:50.581838  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:50.736503  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:50.971196  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:50.971309  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:51.082407  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:51.235543  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:51.473045  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:51.473219  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:51.583010  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:51.736668  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:51.973107  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:51.973392  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:52.081404  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:52.235522  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:52.472032  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:52.473159  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:52.582808  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:52.736218  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:52.972784  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:52.973490  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:53.081558  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:53.236240  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:53.470020  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:53.472589  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:53.588057  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:53.736526  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:53.971976  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:53.972093  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:54.081891  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:54.235934  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:54.471189  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:54.472317  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:54.581585  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:54.736515  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:54.971245  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:54.971606  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:55.081442  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:55.236311  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:55.470504  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:55.471684  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:55.582514  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:55.735772  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:55.970761  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:55.970969  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:56.082423  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:56.235987  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:56.470484  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:56.473545  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:56.581385  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:56.736330  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:56.972621  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:56.973033  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:57.082004  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:57.236222  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:57.470399  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:57.473262  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:57.582971  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:57.736841  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:57.971101  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:57.972236  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:58.082435  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:58.242284  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:58.471103  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 00:12:58.472347  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:58.611524  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:58.736135  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:58.970742  491960 kapi.go:107] duration metric: took 1m10.003785164s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 00:12:58.971384  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:59.082543  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:59.235571  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:59.472082  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:12:59.582498  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:12:59.736000  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:12:59.971742  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:00.104598  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:00.242494  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:00.472066  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:00.582441  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:00.735882  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:00.971780  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:01.082020  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:01.236517  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:01.472021  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:01.582331  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:01.736034  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:01.973818  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:02.081525  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:02.236461  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:02.472652  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:02.582169  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:02.736808  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:02.972522  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:03.081908  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:03.236231  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:03.471532  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:03.587546  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:03.736053  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:03.971465  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:04.082547  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:04.235798  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:04.471398  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:04.582771  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:04.736813  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:04.971589  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:05.082179  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:05.235204  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:05.471942  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:05.582512  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:05.736267  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:05.972515  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:06.081956  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:06.236612  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:06.476460  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:06.603009  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:06.737266  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:06.971759  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:07.081643  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:07.236066  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:07.471315  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:07.585124  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:07.736125  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:07.971803  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:08.083007  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:08.240392  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:08.473834  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:08.581659  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:08.737946  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:08.971308  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:09.103849  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:09.236958  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:09.474818  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:09.582672  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:09.740348  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:09.972075  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:10.083378  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:10.236791  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:10.472707  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:10.582491  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:10.736910  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:10.972175  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:11.082316  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:11.236266  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:11.471408  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:11.581672  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:11.736617  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:11.973466  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:12.082248  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:12.235788  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:12.471295  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:12.582760  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:12.736995  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:12.971852  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:13.082663  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:13.235893  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:13.471529  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:13.581982  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:13.736210  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:13.971413  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:14.082142  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:14.236368  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:14.471408  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:14.581327  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:14.735534  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:14.973480  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:15.082055  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:15.249070  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:15.471821  491960 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 00:13:15.581932  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:15.736821  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:15.971926  491960 kapi.go:107] duration metric: took 1m27.003898399s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 00:13:16.082012  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:16.236260  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:16.631711  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:16.736093  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:17.082498  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:17.236401  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:17.581242  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:17.736187  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:18.081915  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:18.236763  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:18.583619  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:18.736207  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:19.081599  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:19.236445  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:19.581599  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:19.735858  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:20.081423  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:20.236375  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:20.581742  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:20.736174  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:21.083182  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:21.236627  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:21.581670  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:21.742046  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 00:13:22.084719  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:22.236093  491960 kapi.go:107] duration metric: took 1m33.003867734s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 00:13:22.581075  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:23.082179  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:23.582980  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:24.081780  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:24.582058  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:25.081646  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:25.581985  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:26.081989  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:26.581641  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:27.082570  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:27.582546  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:28.082719  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:28.582626  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:29.082803  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:29.583663  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:30.082925  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:30.582477  491960 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 00:13:31.082585  491960 kapi.go:107] duration metric: took 1m38.504176513s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 00:13:31.085587  491960 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-199484 cluster.
	I1212 00:13:31.088469  491960 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 00:13:31.091368  491960 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 00:13:31.094266  491960 out.go:179] * Enabled addons: inspektor-gadget, registry-creds, storage-provisioner, cloud-spanner, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1212 00:13:31.097174  491960 addons.go:530] duration metric: took 1m48.575766633s for enable addons: enabled=[inspektor-gadget registry-creds storage-provisioner cloud-spanner nvidia-device-plugin amd-gpu-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
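
The kapi.go lines above are the visible half of a simple control loop: list the pods behind each addon's label selector roughly every quarter second, treat API errors and empty results as Pending, and stop the clock once everything reports Ready (the kapi.go:107 "duration metric" lines). A minimal sketch of that style of wait, assuming client-go; the function names here are illustrative, not minikube's actual API:

    package kapiwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitForPodsByLabel polls until every pod matching selector is Ready,
    // returning the elapsed time for a "duration metric" style log line.
    func WaitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) (time.Duration, error) {
        start := time.Now()
        err := wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists read as "Pending"
                }
                for i := range pods.Items {
                    if !isReady(&pods.Items[i]) {
                        return false, nil
                    }
                }
                return true, nil
            })
        return time.Since(start), err
    }

    func isReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
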
	I1212 00:13:31.097236  491960 start.go:247] waiting for cluster config update ...
	I1212 00:13:31.097261  491960 start.go:256] writing updated cluster config ...
	I1212 00:13:31.097588  491960 ssh_runner.go:195] Run: rm -f paused
	I1212 00:13:31.103693  491960 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 00:13:31.109294  491960 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jx5nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.115112  491960 pod_ready.go:94] pod "coredns-66bc5c9577-jx5nq" is "Ready"
	I1212 00:13:31.115144  491960 pod_ready.go:86] duration metric: took 5.811699ms for pod "coredns-66bc5c9577-jx5nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.117680  491960 pod_ready.go:83] waiting for pod "etcd-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.123385  491960 pod_ready.go:94] pod "etcd-addons-199484" is "Ready"
	I1212 00:13:31.123417  491960 pod_ready.go:86] duration metric: took 5.705207ms for pod "etcd-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.131121  491960 pod_ready.go:83] waiting for pod "kube-apiserver-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.136832  491960 pod_ready.go:94] pod "kube-apiserver-addons-199484" is "Ready"
	I1212 00:13:31.136864  491960 pod_ready.go:86] duration metric: took 5.713814ms for pod "kube-apiserver-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.139485  491960 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.507295  491960 pod_ready.go:94] pod "kube-controller-manager-addons-199484" is "Ready"
	I1212 00:13:31.507322  491960 pod_ready.go:86] duration metric: took 367.809688ms for pod "kube-controller-manager-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:31.707749  491960 pod_ready.go:83] waiting for pod "kube-proxy-67nfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:32.108536  491960 pod_ready.go:94] pod "kube-proxy-67nfx" is "Ready"
	I1212 00:13:32.108561  491960 pod_ready.go:86] duration metric: took 400.7853ms for pod "kube-proxy-67nfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:32.307780  491960 pod_ready.go:83] waiting for pod "kube-scheduler-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:32.708339  491960 pod_ready.go:94] pod "kube-scheduler-addons-199484" is "Ready"
	I1212 00:13:32.708409  491960 pod_ready.go:86] duration metric: took 400.601499ms for pod "kube-scheduler-addons-199484" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 00:13:32.708425  491960 pod_ready.go:40] duration metric: took 1.604693936s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
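
The pod_ready.go lines use a slightly different predicate: each control-plane pod is fetched by name, and the wait is satisfied once the pod reports Ready or has been deleted ("or be gone"), so a pod being replaced mid-wait does not wedge the loop. A minimal sketch of that predicate, again assuming client-go and an illustrative helper name:

    package kapiwait

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // readyOrGone is the poll condition: true when the named pod reports
    // Ready, and also true when it no longer exists ("be gone").
    func readyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        p, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil
        }
        if err != nil {
            return false, nil // transient API error: keep polling
        }
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true, nil
            }
        }
        return false, nil
    }
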
	I1212 00:13:32.762174  491960 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 00:13:32.765642  491960 out.go:179] * Done! kubectl is now configured to use "addons-199484" cluster and "default" namespace by default
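
The closing version line flags a minor skew of 1 between kubectl 1.33.2 and the 1.34.2 cluster, which is within kubectl's supported window of one minor version in either direction. The arithmetic is just a comparison of the minor components; a stdlib-only sketch (minikube's own check may differ):

    package skew

    import (
        "strconv"
        "strings"
    )

    // minorSkew returns |minor(a) - minor(b)| for "major.minor.patch"
    // strings, e.g. minorSkew("1.33.2", "1.34.2") == 1.
    func minorSkew(a, b string) (int, error) {
        am, err := minor(a)
        if err != nil {
            return 0, err
        }
        bm, err := minor(b)
        if err != nil {
            return 0, err
        }
        if am > bm {
            return am - bm, nil
        }
        return bm - am, nil
    }

    func minor(v string) (int, error) {
        parts := strings.Split(v, ".")
        if len(parts) < 2 {
            return 0, strconv.ErrSyntax
        }
        return strconv.Atoi(parts[1])
    }
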
	
	
	==> CRI-O <==
	Dec 12 00:13:34 addons-199484 crio[827]: time="2025-12-12T00:13:34.122610965Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:54b40a8ef2da4eeae1346c8bb777c2f7b15d7d1a5aad5a93a4a30b208541d3f1 UID:bcd6e1dc-6dc8-422c-946c-fcbf92362207 NetNS:/var/run/netns/d13d312a-f7a5-43db-a25b-fd7d76f02ea9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40017be7e0}] Aliases:map[]}"
	Dec 12 00:13:34 addons-199484 crio[827]: time="2025-12-12T00:13:34.122997754Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 12 00:13:34 addons-199484 crio[827]: time="2025-12-12T00:13:34.125900314Z" level=info msg="Ran pod sandbox 54b40a8ef2da4eeae1346c8bb777c2f7b15d7d1a5aad5a93a4a30b208541d3f1 with infra container: default/busybox/POD" id=9a4ffa22-2ca2-4717-a6f4-cc2b4b34067c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:13:34 addons-199484 crio[827]: time="2025-12-12T00:13:34.127433696Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7f338aee-fa39-47e2-9a4a-359bc9eff875 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:13:34 addons-199484 crio[827]: time="2025-12-12T00:13:34.127679155Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7f338aee-fa39-47e2-9a4a-359bc9eff875 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:13:34 addons-199484 crio[827]: time="2025-12-12T00:13:34.127792056Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7f338aee-fa39-47e2-9a4a-359bc9eff875 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:13:34 addons-199484 crio[827]: time="2025-12-12T00:13:34.128880582Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=34646469-df70-4c4f-8487-5893d4077631 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:13:34 addons-199484 crio[827]: time="2025-12-12T00:13:34.130480531Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.050096524Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=34646469-df70-4c4f-8487-5893d4077631 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.0508432Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=30e9fc8d-d065-46cf-bdb9-ffdf073a878a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.052833803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4dd51d65-a7fe-4391-8b78-a58ba22cdf18 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.060910495Z" level=info msg="Creating container: default/busybox/busybox" id=754b0445-f432-40f3-a8b9-5b1a2f3cf0be name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.061051875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.070571474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.071335486Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.089482255Z" level=info msg="Created container bffb40f9c1d3c9eb25923bbc1d25fff1b8f65d4dbb8dd44f45a6cfc824f0ad79: default/busybox/busybox" id=754b0445-f432-40f3-a8b9-5b1a2f3cf0be name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.090527648Z" level=info msg="Starting container: bffb40f9c1d3c9eb25923bbc1d25fff1b8f65d4dbb8dd44f45a6cfc824f0ad79" id=1557f013-7535-4476-8602-88066a6a23a3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.09635131Z" level=info msg="Started container" PID=4987 containerID=bffb40f9c1d3c9eb25923bbc1d25fff1b8f65d4dbb8dd44f45a6cfc824f0ad79 description=default/busybox/busybox id=1557f013-7535-4476-8602-88066a6a23a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=54b40a8ef2da4eeae1346c8bb777c2f7b15d7d1a5aad5a93a4a30b208541d3f1
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.884196264Z" level=info msg="Removing container: b621c1edff18438033a28370ea590d71d0376a1c8ee08898ae9ea706e6e0507a" id=820a0ff7-244d-46f7-9fd2-abc681d4d56b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.887378866Z" level=info msg="Error loading conmon cgroup of container b621c1edff18438033a28370ea590d71d0376a1c8ee08898ae9ea706e6e0507a: cgroup deleted" id=820a0ff7-244d-46f7-9fd2-abc681d4d56b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.906276578Z" level=info msg="Removed container b621c1edff18438033a28370ea590d71d0376a1c8ee08898ae9ea706e6e0507a: gcp-auth/gcp-auth-certs-create-tfczt/create" id=820a0ff7-244d-46f7-9fd2-abc681d4d56b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.909851672Z" level=info msg="Stopping pod sandbox: 578af7e03422bda7f12a00d1428d73270eb22183ddfcf9f5fe3e130f78273611" id=ee98d88d-3e71-4449-95ec-956d3f35a09b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.909908762Z" level=info msg="Stopped pod sandbox (already stopped): 578af7e03422bda7f12a00d1428d73270eb22183ddfcf9f5fe3e130f78273611" id=ee98d88d-3e71-4449-95ec-956d3f35a09b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.910496368Z" level=info msg="Removing pod sandbox: 578af7e03422bda7f12a00d1428d73270eb22183ddfcf9f5fe3e130f78273611" id=d6a81c89-8443-4c74-9f25-0ccd41f48a31 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 12 00:13:36 addons-199484 crio[827]: time="2025-12-12T00:13:36.915498451Z" level=info msg="Removed pod sandbox: 578af7e03422bda7f12a00d1428d73270eb22183ddfcf9f5fe3e130f78273611" id=d6a81c89-8443-4c74-9f25-0ccd41f48a31 name=/runtime.v1.RuntimeService/RemovePodSandbox
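
The CRI-O entries trace the standard CRI flow for a new pod: RunPodSandbox, an ImageStatus probe that misses, PullImage, then CreateContainer and StartContainer. A minimal sketch of the image half of that conversation over the CRI gRPC API, assuming the default crio.sock path (adjust for your host); error handling is trimmed to the essentials:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI-O socket (distro-default path; an assumption here).
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
        ic := runtimeapi.NewImageServiceClient(conn)

        // ImageStatus returns a nil Image when nothing is cached locally,
        // which is what produces the "not found" line before the pull.
        st, err := ic.ImageStatus(context.Background(), &runtimeapi.ImageStatusRequest{Image: img})
        if err != nil {
            log.Fatal(err)
        }
        if st.Image == nil {
            pulled, err := ic.PullImage(context.Background(), &runtimeapi.PullImageRequest{Image: img})
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println("pulled:", pulled.ImageRef) // digest-pinned ref, as in the "Pulled image" line
        }
    }
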
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	bffb40f9c1d3c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   54b40a8ef2da4       busybox                                     default
	8cdc6f2f81593       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 14 seconds ago       Running             gcp-auth                                 0                   79ad452890c93       gcp-auth-78565c9fb4-29cjc                   gcp-auth
	0b887da02a72c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          23 seconds ago       Running             csi-snapshotter                          0                   57b2c259d2d98       csi-hostpathplugin-pwldg                    kube-system
	0104cc6b5dd42       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          24 seconds ago       Running             csi-provisioner                          0                   57b2c259d2d98       csi-hostpathplugin-pwldg                    kube-system
	c47d48a718439       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            26 seconds ago       Running             liveness-probe                           0                   57b2c259d2d98       csi-hostpathplugin-pwldg                    kube-system
	5bff244d59411       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           27 seconds ago       Running             hostpath                                 0                   57b2c259d2d98       csi-hostpathplugin-pwldg                    kube-system
	ef710450ec222       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                28 seconds ago       Running             node-driver-registrar                    0                   57b2c259d2d98       csi-hostpathplugin-pwldg                    kube-system
	58d2c13864228       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             29 seconds ago       Running             controller                               0                   f523f728238d3       ingress-nginx-controller-85d4c799dd-tm9s4   ingress-nginx
	3ff28337f5829       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             30 seconds ago       Exited              patch                                    3                   8450cd59672ba       gcp-auth-certs-patch-sdm7m                  gcp-auth
	c609012661b80       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            37 seconds ago       Running             gadget                                   0                   ac6a171ca7cd8       gadget-926zk                                gadget
	a10cfd4bcf4a6       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   40 seconds ago       Running             csi-external-health-monitor-controller   0                   57b2c259d2d98       csi-hostpathplugin-pwldg                    kube-system
	7adec1475ef10       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     41 seconds ago       Running             nvidia-device-plugin-ctr                 0                   2d037c44f8a02       nvidia-device-plugin-daemonset-4jhc7        kube-system
	ebe4e7c9ca0fc       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              46 seconds ago       Running             registry-proxy                           0                   1b47233f95609       registry-proxy-rj8pk                        kube-system
	b063d4752b149       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              50 seconds ago       Running             yakd                                     0                   26f1e3f1839a5       yakd-dashboard-5ff678cb9-54gdk              yakd-dashboard
	120bd80c13ba7       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             54 seconds ago       Running             local-path-provisioner                   0                   bdc9f4c8acc57       local-path-provisioner-648f6765c9-jz8vr     local-path-storage
	e42d45fdc95da       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             55 seconds ago       Exited              patch                                    2                   d38258a9d396a       ingress-nginx-admission-patch-5c76k         ingress-nginx
	c6014c9718ca5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   55 seconds ago       Exited              create                                   0                   758e2f6efb010       ingress-nginx-admission-create-75m2k        ingress-nginx
	aa9c00c762f80       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               55 seconds ago       Running             cloud-spanner-emulator                   0                   ba9216cb127f4       cloud-spanner-emulator-5bdddb765-vnsm7      default
	7859d680677c9       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               59 seconds ago       Running             minikube-ingress-dns                     0                   03fd0053a3ae8       kube-ingress-dns-minikube                   kube-system
	e47e9aabb94d9       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   6e0210b2e75dd       csi-hostpath-attacher-0                     kube-system
	afc99c45439ad       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   b9ec3cdd46cca       metrics-server-85b7d694d7-mp4fx             kube-system
	e021d26e2771d       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   9270912a4a449       csi-hostpath-resizer-0                      kube-system
	9da22fcbd3de3       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   4111dda14b5b1       registry-6b586f9694-d69pq                   kube-system
	12d12a2561d73       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   2d7596ebe068b       snapshot-controller-7d9fbc56b8-lxshs        kube-system
	549fae89400cf       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   f31edd0cf94de       snapshot-controller-7d9fbc56b8-p7d72        kube-system
	e4aaf8d36273d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   8364a161101b4       storage-provisioner                         kube-system
	be3ca68362678       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   48d20da50be89       coredns-66bc5c9577-jx5nq                    kube-system
	e251865f884a7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   0169747ae697d       kindnet-5nsn6                               kube-system
	f4dd998c607c5       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             2 minutes ago        Running             kube-proxy                               0                   7b1a71049e6f4       kube-proxy-67nfx                            kube-system
	10211afe59632       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   0370f535c60d1       kube-apiserver-addons-199484                kube-system
	7e478b538e97d       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   7c607c3408224       kube-controller-manager-addons-199484       kube-system
	810bdb88faff8       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   3a252a9aeae97       etcd-addons-199484                          kube-system
	8f971c589eb18       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   218268e9b0239       kube-scheduler-addons-199484                kube-system
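
	Note: the listing above is the node's CRI state; it matches what `sudo crictl ps -a` prints on the minikube node. A hedged way to reproduce it against this profile (assuming the profile name from this run):

		out/minikube-linux-arm64 -p addons-199484 ssh -- sudo crictl ps -a

	The Exited create/patch entries are the ingress-nginx and gcp-auth admission-certificate Jobs, which run to completion by design; the patch container's ATTEMPT of 3 just means the Job retried before the webhook service was reachable.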
	
	
	==> coredns [be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1] <==
	[INFO] 10.244.0.12:34448 - 28965 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000150528s
	[INFO] 10.244.0.12:34448 - 17484 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00222296s
	[INFO] 10.244.0.12:34448 - 48660 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002267094s
	[INFO] 10.244.0.12:34448 - 21147 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000159266s
	[INFO] 10.244.0.12:34448 - 42908 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000136802s
	[INFO] 10.244.0.12:35429 - 54993 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177489s
	[INFO] 10.244.0.12:35429 - 54531 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100371s
	[INFO] 10.244.0.12:55804 - 14784 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000142126s
	[INFO] 10.244.0.12:55804 - 14348 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083962s
	[INFO] 10.244.0.12:35948 - 17357 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110956s
	[INFO] 10.244.0.12:35948 - 16928 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107755s
	[INFO] 10.244.0.12:48440 - 46169 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00127048s
	[INFO] 10.244.0.12:48440 - 45956 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001290048s
	[INFO] 10.244.0.12:43487 - 59800 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000116371s
	[INFO] 10.244.0.12:43487 - 59603 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092305s
	[INFO] 10.244.0.21:51137 - 52159 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000265126s
	[INFO] 10.244.0.21:34101 - 18328 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000380398s
	[INFO] 10.244.0.21:35727 - 58825 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000191463s
	[INFO] 10.244.0.21:59127 - 13889 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159291s
	[INFO] 10.244.0.21:38882 - 29066 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000149511s
	[INFO] 10.244.0.21:44659 - 23772 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064491s
	[INFO] 10.244.0.21:51669 - 52108 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002638532s
	[INFO] 10.244.0.21:37979 - 37419 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002927896s
	[INFO] 10.244.0.21:43043 - 18570 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002555235s
	[INFO] 10.244.0.21:43515 - 44500 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002784227s
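
	Note: the NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-path expansion, not failures. With the usual in-cluster resolver settings (a sketch of the typical pod resolv.conf, assuming minikube defaults on this AWS host):

		search <pod-namespace>.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
		options ndots:5

	each short name is tried against every search domain (yielding NXDOMAIN) before the fully-qualified query returns NOERROR.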
	
	
	==> describe nodes <==
	Name:               addons-199484
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-199484
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=addons-199484
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T00_11_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-199484
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-199484"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 00:11:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-199484
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 00:13:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 00:13:39 +0000   Fri, 12 Dec 2025 00:11:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 00:13:39 +0000   Fri, 12 Dec 2025 00:11:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 00:13:39 +0000   Fri, 12 Dec 2025 00:11:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 00:13:39 +0000   Fri, 12 Dec 2025 00:12:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-199484
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                9e7dfbbd-1a6e-487c-a48c-31ada1830da5
	  Boot ID:                    cbbb78f6-c2df-4b23-9269-8d5d442bffaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-5bdddb765-vnsm7       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  gadget                      gadget-926zk                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  gcp-auth                    gcp-auth-78565c9fb4-29cjc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-tm9s4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         117s
	  kube-system                 coredns-66bc5c9577-jx5nq                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 csi-hostpathplugin-pwldg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 etcd-addons-199484                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m8s
	  kube-system                 kindnet-5nsn6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m4s
	  kube-system                 kube-apiserver-addons-199484                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-addons-199484        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-67nfx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-addons-199484                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 metrics-server-85b7d694d7-mp4fx              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         118s
	  kube-system                 nvidia-device-plugin-daemonset-4jhc7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 registry-6b586f9694-d69pq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 registry-creds-764b6fb674-mf9j5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 registry-proxy-rj8pk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 snapshot-controller-7d9fbc56b8-lxshs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 snapshot-controller-7d9fbc56b8-p7d72         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  local-path-storage          local-path-provisioner-648f6765c9-jz8vr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-54gdk               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m2s                   kube-proxy       
	  Warning  CgroupV1                 2m16s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node addons-199484 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node addons-199484 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m16s (x8 over 2m16s)  kubelet          Node addons-199484 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m9s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m8s                   kubelet          Node addons-199484 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s                   kubelet          Node addons-199484 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s                   kubelet          Node addons-199484 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m4s                   node-controller  Node addons-199484 event: Registered Node addons-199484 in Controller
	  Normal   NodeReady                82s                    kubelet          Node addons-199484 status is now: NodeReady
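
	Note: this block corresponds to `kubectl describe node addons-199484` at collection time. The duplicated NodeHasSufficient*/CgroupV1 events reflect the kubelet starting twice during kubeadm bring-up (hence the two Starting entries), which is routine. To re-query it later (assuming the kubeconfig context minikube creates for the profile):

		kubectl --context addons-199484 describe node addons-199484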
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df] <==
	{"level":"warn","ts":"2025-12-12T00:11:32.449168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.470990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.531758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.561321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.598536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.622486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.663599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.682782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.711568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.771454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.787007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.818700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.849964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.872149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.913586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.949657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:32.982547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:33.006907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:33.197473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:49.619554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:11:49.623175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:12:11.127795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:12:11.144333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:12:11.191870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T00:12:11.226430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37268","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [8cdc6f2f81593a6954c230601978343429395ab486dc6f876c12478dc8cfe38d] <==
	2025/12/12 00:13:30 GCP Auth Webhook started!
	2025/12/12 00:13:33 Ready to marshal response ...
	2025/12/12 00:13:33 Ready to write response ...
	2025/12/12 00:13:33 Ready to marshal response ...
	2025/12/12 00:13:33 Ready to write response ...
	2025/12/12 00:13:33 Ready to marshal response ...
	2025/12/12 00:13:33 Ready to write response ...
	
	
	==> kernel <==
	 00:13:45 up  2:56,  0 user,  load average: 3.69, 2.20, 1.97
	Linux addons-199484 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
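
	Note: this block appears to be assembled from standard host commands; roughly the following, run over SSH (a sketch, exact collection path assumed):

		out/minikube-linux-arm64 -p addons-199484 ssh -- 'uptime; uname -a; grep PRETTY_NAME /etc/os-release'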
	
	
	==> kindnet [e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1] <==
	E1212 00:12:12.712844       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1212 00:12:12.712962       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1212 00:12:12.713837       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1212 00:12:12.713885       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1212 00:12:14.312806       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 00:12:14.312861       1 metrics.go:72] Registering metrics
	I1212 00:12:14.312935       1 controller.go:711] "Syncing nftables rules"
	I1212 00:12:22.718088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:12:22.718147       1 main.go:301] handling current node
	I1212 00:12:32.712357       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:12:32.712394       1 main.go:301] handling current node
	I1212 00:12:42.714746       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:12:42.714817       1 main.go:301] handling current node
	I1212 00:12:52.712677       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:12:52.712715       1 main.go:301] handling current node
	I1212 00:13:02.712898       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:13:02.712929       1 main.go:301] handling current node
	I1212 00:13:12.712325       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:13:12.712379       1 main.go:301] handling current node
	I1212 00:13:22.712372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:13:22.712493       1 main.go:301] handling current node
	I1212 00:13:32.712841       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:13:32.712899       1 main.go:301] handling current node
	I1212 00:13:42.714785       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:13:42.714828       1 main.go:301] handling current node
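
	Note: the initial "dial tcp 10.96.0.1:443: i/o timeout" watch failures happen before kube-proxy has programmed the kubernetes service VIP on this node; they stop once caches sync at 00:12:14, after which the CNI settles into its ten-second single-node reconcile loop. Were the timeouts to persist, a first check (standard kubectl, context name assumed) would be whether the apiserver endpoint is published:

		kubectl --context addons-199484 -n default get endpoints kubernetes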
	
	
	==> kube-apiserver [10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61] <==
	I1212 00:11:49.113334       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1212 00:11:49.186449       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.110.184.183"}
	W1212 00:11:49.607714       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1212 00:11:49.623014       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1212 00:11:52.441587       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.252.137"}
	W1212 00:12:11.127749       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 00:12:11.144344       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 00:12:11.190801       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 00:12:11.216191       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 00:12:23.248819       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.252.137:443: connect: connection refused
	E1212 00:12:23.248867       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.252.137:443: connect: connection refused" logger="UnhandledError"
	W1212 00:12:23.249882       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.252.137:443: connect: connection refused
	E1212 00:12:23.249972       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.252.137:443: connect: connection refused" logger="UnhandledError"
	W1212 00:12:23.371517       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.252.137:443: connect: connection refused
	E1212 00:12:23.371683       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.252.137:443: connect: connection refused" logger="UnhandledError"
	E1212 00:12:46.459346       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.90.105:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.90.105:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.90.105:443: connect: connection refused" logger="UnhandledError"
	W1212 00:12:46.459531       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 00:12:46.459584       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 00:12:46.546636       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 00:13:42.761098       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58638: use of closed network connection
	E1212 00:13:43.046633       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58670: use of closed network connection
	E1212 00:13:43.190024       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58680: use of closed network connection
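
	Note: two transients overlap here. The gcp-auth-mutate.k8s.io webhook fails open (connection refused) while the gcp-auth pod is still starting, so admission proceeds anyway; and v1beta1.metrics.k8s.io answers 503 until metrics-server is serving. A hedged way to watch the latter recover:

		kubectl --context addons-199484 get apiservice v1beta1.metrics.k8s.io
		# AVAILABLE should flip to True once metrics-server responds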
	
	
	==> kube-controller-manager [7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae] <==
	I1212 00:11:41.112252       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 00:11:41.113046       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 00:11:41.113162       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 00:11:41.116185       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:11:41.120801       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 00:11:41.129059       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:11:41.131209       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 00:11:41.133456       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 00:11:41.143039       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 00:11:41.144110       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 00:11:41.147445       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 00:11:41.149707       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 00:11:41.154966       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1212 00:11:47.469119       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1212 00:11:47.495077       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1212 00:12:11.121132       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 00:12:11.121289       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1212 00:12:11.121342       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1212 00:12:11.173463       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1212 00:12:11.182443       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1212 00:12:11.222802       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 00:12:11.282947       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 00:12:26.104244       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1212 00:12:41.228259       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 00:12:41.291354       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
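
	Note: the "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors from the resource-quota and garbage-collector controllers are downstream of the same metrics-server 503 visible in the apiserver log; both controllers enumerate every served API and re-sync once discovery succeeds, as the adjacent "Caches are synced" lines show. A hedged spot check:

		kubectl --context addons-199484 api-resources --api-group=metrics.k8s.io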
	
	
	==> kube-proxy [f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123] <==
	I1212 00:11:42.421948       1 server_linux.go:53] "Using iptables proxy"
	I1212 00:11:42.546170       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 00:11:42.646270       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 00:11:42.646303       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1212 00:11:42.646377       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 00:11:42.723153       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 00:11:42.723287       1 server_linux.go:132] "Using iptables Proxier"
	I1212 00:11:42.819490       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 00:11:42.821280       1 server.go:527] "Version info" version="v1.34.2"
	I1212 00:11:42.825284       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:11:42.855018       1 config.go:106] "Starting endpoint slice config controller"
	I1212 00:11:42.855047       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 00:11:42.855465       1 config.go:200] "Starting service config controller"
	I1212 00:11:42.855472       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 00:11:42.855808       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 00:11:42.855815       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 00:11:42.856229       1 config.go:309] "Starting node config controller"
	I1212 00:11:42.856235       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 00:11:42.856241       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 00:11:42.961256       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 00:11:42.961293       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 00:11:42.961310       1 shared_informer.go:356] "Caches are synced" controller="service config"
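
	Note: the "configuration may be incomplete or incorrect" line is advisory: with nodePortAddresses unset, NodePort connections are accepted on all local IPs. The log's own suggested fix would map onto minikube's extra-config mechanism, roughly (a sketch; the exact flag plumbing for this field is an assumption):

		out/minikube-linux-arm64 start -p addons-199484 --extra-config=kube-proxy.nodePortAddresses=primary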
	
	
	==> kube-scheduler [8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a] <==
	I1212 00:11:35.024356       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:11:35.027385       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 00:11:35.027532       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:11:35.027556       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:11:35.027574       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 00:11:35.032029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 00:11:35.038575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1212 00:11:35.040154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 00:11:35.040309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 00:11:35.040565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 00:11:35.040693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 00:11:35.040934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 00:11:35.041055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 00:11:35.041196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 00:11:35.041308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 00:11:35.041443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 00:11:35.041542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 00:11:35.041643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 00:11:35.041752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 00:11:35.041879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 00:11:35.042004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 00:11:35.042250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 00:11:35.042454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 00:11:35.042522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1212 00:11:36.627910       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
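
	Note: the burst of "is forbidden: User \"system:kube-scheduler\"" watch errors at 00:11:35 is a normal bootstrap race: the scheduler starts before the apiserver has reconciled the default RBAC bindings, and the errors end once those exist (caches sync a second later). A hedged after-the-fact check:

		kubectl --context addons-199484 auth can-i list pods --as=system:kube-scheduler
		# prints "yes" once RBAC is reconciled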
	
	
	==> kubelet <==
	Dec 12 00:13:04 addons-199484 kubelet[1285]: I1212 00:13:04.560565    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-4jhc7" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:13:06 addons-199484 kubelet[1285]: I1212 00:13:06.559729    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-4jhc7" podStartSLOduration=4.978921489 podStartE2EDuration="43.559707702s" podCreationTimestamp="2025-12-12 00:12:23 +0000 UTC" firstStartedPulling="2025-12-12 00:12:24.345660134 +0000 UTC m=+47.667339893" lastFinishedPulling="2025-12-12 00:13:02.926446265 +0000 UTC m=+86.248126106" observedRunningTime="2025-12-12 00:13:03.575965099 +0000 UTC m=+86.897644866" watchObservedRunningTime="2025-12-12 00:13:06.559707702 +0000 UTC m=+89.881387469"
	Dec 12 00:13:08 addons-199484 kubelet[1285]: I1212 00:13:08.621304    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-926zk" podStartSLOduration=66.703883309 podStartE2EDuration="1m21.621285338s" podCreationTimestamp="2025-12-12 00:11:47 +0000 UTC" firstStartedPulling="2025-12-12 00:12:52.884397606 +0000 UTC m=+76.206077365" lastFinishedPulling="2025-12-12 00:13:07.801799602 +0000 UTC m=+91.123479394" observedRunningTime="2025-12-12 00:13:08.620660556 +0000 UTC m=+91.942340356" watchObservedRunningTime="2025-12-12 00:13:08.621285338 +0000 UTC m=+91.942965105"
	Dec 12 00:13:11 addons-199484 kubelet[1285]: I1212 00:13:11.812830    1285 scope.go:117] "RemoveContainer" containerID="72aa085e29ecaf0446bb51ac0f3c52f28d92bc110513f67502f6e03578ceefad"
	Dec 12 00:13:15 addons-199484 kubelet[1285]: I1212 00:13:15.659477    1285 scope.go:117] "RemoveContainer" containerID="72aa085e29ecaf0446bb51ac0f3c52f28d92bc110513f67502f6e03578ceefad"
	Dec 12 00:13:16 addons-199484 kubelet[1285]: I1212 00:13:16.700250    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-tm9s4" podStartSLOduration=69.121067999 podStartE2EDuration="1m28.70021718s" podCreationTimestamp="2025-12-12 00:11:48 +0000 UTC" firstStartedPulling="2025-12-12 00:12:55.491398677 +0000 UTC m=+78.813078436" lastFinishedPulling="2025-12-12 00:13:15.07054785 +0000 UTC m=+98.392227617" observedRunningTime="2025-12-12 00:13:15.713837909 +0000 UTC m=+99.035517742" watchObservedRunningTime="2025-12-12 00:13:16.70021718 +0000 UTC m=+100.021896939"
	Dec 12 00:13:16 addons-199484 kubelet[1285]: I1212 00:13:16.836797    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpkgf\" (UniqueName: \"kubernetes.io/projected/abb7f9b4-0bf1-4862-a813-f9e34ad9c084-kube-api-access-jpkgf\") pod \"abb7f9b4-0bf1-4862-a813-f9e34ad9c084\" (UID: \"abb7f9b4-0bf1-4862-a813-f9e34ad9c084\") "
	Dec 12 00:13:16 addons-199484 kubelet[1285]: I1212 00:13:16.849480    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb7f9b4-0bf1-4862-a813-f9e34ad9c084-kube-api-access-jpkgf" (OuterVolumeSpecName: "kube-api-access-jpkgf") pod "abb7f9b4-0bf1-4862-a813-f9e34ad9c084" (UID: "abb7f9b4-0bf1-4862-a813-f9e34ad9c084"). InnerVolumeSpecName "kube-api-access-jpkgf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 00:13:16 addons-199484 kubelet[1285]: I1212 00:13:16.938174    1285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jpkgf\" (UniqueName: \"kubernetes.io/projected/abb7f9b4-0bf1-4862-a813-f9e34ad9c084-kube-api-access-jpkgf\") on node \"addons-199484\" DevicePath \"\""
	Dec 12 00:13:17 addons-199484 kubelet[1285]: I1212 00:13:17.697165    1285 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8450cd59672ba744390c5f5739439483d928bd18f0a561461387324951f8dc6f"
	Dec 12 00:13:19 addons-199484 kubelet[1285]: I1212 00:13:19.117371    1285 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 12 00:13:19 addons-199484 kubelet[1285]: I1212 00:13:19.117417    1285 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 12 00:13:26 addons-199484 kubelet[1285]: I1212 00:13:26.713431    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-pwldg" podStartSLOduration=6.459519443 podStartE2EDuration="1m3.713411379s" podCreationTimestamp="2025-12-12 00:12:23 +0000 UTC" firstStartedPulling="2025-12-12 00:12:24.345703465 +0000 UTC m=+47.667383224" lastFinishedPulling="2025-12-12 00:13:21.599595393 +0000 UTC m=+104.921275160" observedRunningTime="2025-12-12 00:13:21.763306195 +0000 UTC m=+105.084985987" watchObservedRunningTime="2025-12-12 00:13:26.713411379 +0000 UTC m=+110.035091146"
	Dec 12 00:13:27 addons-199484 kubelet[1285]: E1212 00:13:27.215838    1285 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 12 00:13:27 addons-199484 kubelet[1285]: E1212 00:13:27.216071    1285 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fa7165f-4d24-4504-b416-3d9ed68b1a7f-gcr-creds podName:7fa7165f-4d24-4504-b416-3d9ed68b1a7f nodeName:}" failed. No retries permitted until 2025-12-12 00:14:31.216052343 +0000 UTC m=+174.537732110 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/7fa7165f-4d24-4504-b416-3d9ed68b1a7f-gcr-creds") pod "registry-creds-764b6fb674-mf9j5" (UID: "7fa7165f-4d24-4504-b416-3d9ed68b1a7f") : secret "registry-creds-gcr" not found
	Dec 12 00:13:31 addons-199484 kubelet[1285]: I1212 00:13:31.062470    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-29cjc" podStartSLOduration=95.953697461 podStartE2EDuration="1m39.062449555s" podCreationTimestamp="2025-12-12 00:11:52 +0000 UTC" firstStartedPulling="2025-12-12 00:13:27.501349285 +0000 UTC m=+110.823029044" lastFinishedPulling="2025-12-12 00:13:30.610101371 +0000 UTC m=+113.931781138" observedRunningTime="2025-12-12 00:13:30.795645421 +0000 UTC m=+114.117325180" watchObservedRunningTime="2025-12-12 00:13:31.062449555 +0000 UTC m=+114.384129314"
	Dec 12 00:13:32 addons-199484 kubelet[1285]: I1212 00:13:32.817117    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="721000b4-0cca-45e1-b172-4c27c500ce50" path="/var/lib/kubelet/pods/721000b4-0cca-45e1-b172-4c27c500ce50/volumes"
	Dec 12 00:13:33 addons-199484 kubelet[1285]: I1212 00:13:33.669061    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl9dl\" (UniqueName: \"kubernetes.io/projected/bcd6e1dc-6dc8-422c-946c-fcbf92362207-kube-api-access-fl9dl\") pod \"busybox\" (UID: \"bcd6e1dc-6dc8-422c-946c-fcbf92362207\") " pod="default/busybox"
	Dec 12 00:13:33 addons-199484 kubelet[1285]: I1212 00:13:33.669305    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bcd6e1dc-6dc8-422c-946c-fcbf92362207-gcp-creds\") pod \"busybox\" (UID: \"bcd6e1dc-6dc8-422c-946c-fcbf92362207\") " pod="default/busybox"
	Dec 12 00:13:36 addons-199484 kubelet[1285]: I1212 00:13:36.832970    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.9093411850000002 podStartE2EDuration="3.832953056s" podCreationTimestamp="2025-12-12 00:13:33 +0000 UTC" firstStartedPulling="2025-12-12 00:13:34.12820026 +0000 UTC m=+117.449880027" lastFinishedPulling="2025-12-12 00:13:36.051812139 +0000 UTC m=+119.373491898" observedRunningTime="2025-12-12 00:13:36.831564467 +0000 UTC m=+120.153244225" watchObservedRunningTime="2025-12-12 00:13:36.832953056 +0000 UTC m=+120.154632823"
	Dec 12 00:13:36 addons-199484 kubelet[1285]: I1212 00:13:36.882756    1285 scope.go:117] "RemoveContainer" containerID="b621c1edff18438033a28370ea590d71d0376a1c8ee08898ae9ea706e6e0507a"
	Dec 12 00:13:36 addons-199484 kubelet[1285]: E1212 00:13:36.977512    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/14ca2166c50c5729d361b96f49df4f207bf3937bae2b9d2e3e23ae08a745517a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/14ca2166c50c5729d361b96f49df4f207bf3937bae2b9d2e3e23ae08a745517a/diff: no such file or directory, extraDiskErr: <nil>
	Dec 12 00:13:36 addons-199484 kubelet[1285]: E1212 00:13:36.979610    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/690225ed0d1cd7c8bdc490ae1f02f887d2e937646fc23960f2fb1a8fde199566/diff" to get inode usage: stat /var/lib/containers/storage/overlay/690225ed0d1cd7c8bdc490ae1f02f887d2e937646fc23960f2fb1a8fde199566/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-admission-patch-5c76k_939a266e-953b-44e1-8395-a52aabe227bf/patch/1.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-admission-patch-5c76k_939a266e-953b-44e1-8395-a52aabe227bf/patch/1.log: no such file or directory
	Dec 12 00:13:43 addons-199484 kubelet[1285]: E1212 00:13:43.191573    1285 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:45358->127.0.0.1:35553: read tcp 127.0.0.1:45358->127.0.0.1:35553: read: connection reset by peer
	Dec 12 00:13:43 addons-199484 kubelet[1285]: E1212 00:13:43.191750    1285 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45358->127.0.0.1:35553: write tcp 127.0.0.1:45358->127.0.0.1:35553: write: broken pipe
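The two recurring kubelet errors above are independent: the upgradeaware proxy EOFs are port-forward teardown noise, while the registry-creds mount will keep failing on its backoff (1m4s here) until the missing Secret kube-system/registry-creds-gcr exists. A minimal client-go sketch that creates a placeholder Secret so the mount can proceed; the data key below is hypothetical, not taken from this log:

// create_secret.go - sketch, assuming kubeconfig access and that the addon
// only needs the Secret object to exist. The data key is hypothetical.
package main

import (
	"context"
	"log"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "registry-creds-gcr", Namespace: "kube-system"},
		// Hypothetical key: the real addon may expect specific GCR credentials here.
		StringData: map[string]string{"application_default_credentials.json": "{}"},
	}
	if _, err := client.CoreV1().Secrets("kube-system").Create(context.Background(), secret, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("created kube-system/registry-creds-gcr; kubelet retries the mount on its next backoff")
}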
	
	
	==> storage-provisioner [e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a] <==
	W1212 00:13:20.826722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:22.830611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:22.835356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:24.839416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:24.845357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:26.848929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:26.853466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:28.856835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:28.865097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:30.868280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:30.875084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:32.878903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:32.888233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:34.892586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:34.897306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:36.900776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:36.905624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:38.908347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:38.912999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:40.916516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:40.923162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:42.926579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:42.935561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:44.939595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 00:13:44.946846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
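The storage-provisioner section above shows client-go warning on every leader-election poll that v1 Endpoints is deprecated in v1.33+; the replacement read path is discovery.k8s.io/v1 EndpointSlice. A sketch of that API with a standard kubeconfig (this is not the provisioner's actual code, and the kube-dns selector is just an example):

// endpointslices.go - sketch: read EndpointSlices instead of deprecated v1 Endpoints.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Slices belonging to a Service carry the "kubernetes.io/service-name" label.
	slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range slices.Items {
		fmt.Println(s.Name, len(s.Endpoints), "endpoints")
	}
}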
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-199484 -n addons-199484
helpers_test.go:270: (dbg) Run:  kubectl --context addons-199484 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-patch-sdm7m ingress-nginx-admission-create-75m2k ingress-nginx-admission-patch-5c76k registry-creds-764b6fb674-mf9j5
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-199484 describe pod gcp-auth-certs-patch-sdm7m ingress-nginx-admission-create-75m2k ingress-nginx-admission-patch-5c76k registry-creds-764b6fb674-mf9j5
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-199484 describe pod gcp-auth-certs-patch-sdm7m ingress-nginx-admission-create-75m2k ingress-nginx-admission-patch-5c76k registry-creds-764b6fb674-mf9j5: exit status 1 (92.974796ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-sdm7m" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-75m2k" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5c76k" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-mf9j5" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-199484 describe pod gcp-auth-certs-patch-sdm7m ingress-nginx-admission-create-75m2k ingress-nginx-admission-patch-5c76k registry-creds-764b6fb674-mf9j5: exit status 1
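The post-mortem above relies on kubectl's field selector status.phase!=Running; the same query in client-go, as a sketch with the usual kubeconfig assumption:

// nonrunning.go - sketch: list non-running pods across all namespaces,
// mirroring the helper's field selector.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := client.CoreV1().Pods("").List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}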
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable headlamp --alsologtostderr -v=1: exit status 11 (265.055115ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:13:46.595152  498541 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:13:46.596245  498541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:46.596262  498541 out.go:374] Setting ErrFile to fd 2...
	I1212 00:13:46.596268  498541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:46.596522  498541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:13:46.596831  498541 mustload.go:66] Loading cluster: addons-199484
	I1212 00:13:46.597215  498541 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:13:46.597233  498541 addons.go:622] checking whether the cluster is paused
	I1212 00:13:46.597342  498541 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:13:46.597356  498541 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:13:46.599081  498541 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:13:46.617150  498541 ssh_runner.go:195] Run: systemctl --version
	I1212 00:13:46.617213  498541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:13:46.635563  498541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:13:46.737331  498541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:13:46.737431  498541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:13:46.769002  498541 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:13:46.769026  498541 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:13:46.769031  498541 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:13:46.769035  498541 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:13:46.769039  498541 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:13:46.769043  498541 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:13:46.769047  498541 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:13:46.769050  498541 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:13:46.769055  498541 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:13:46.769063  498541 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:13:46.769067  498541 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:13:46.769071  498541 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:13:46.769075  498541 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:13:46.769078  498541 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:13:46.769082  498541 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:13:46.769095  498541 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:13:46.769102  498541 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:13:46.769108  498541 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:13:46.769111  498541 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:13:46.769114  498541 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:13:46.769119  498541 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:13:46.769122  498541 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:13:46.769125  498541 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:13:46.769129  498541 cri.go:89] found id: ""
	I1212 00:13:46.769180  498541 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:13:46.786244  498541 out.go:203] 
	W1212 00:13:46.789045  498541 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:13:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:13:46.789084  498541 out.go:285] * 
	W1212 00:13:46.795540  498541 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:13:46.798386  498541 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.35s)
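Every disable failure in this run exits at the same point: crictl lists the kube-system containers successfully, then minikube's paused check runs sudo runc list -f json, and runc aborts because its default state root /run/runc does not exist on this CRI-O node (consistent with the runtime keeping its state elsewhere, e.g. under crun). A standalone Go sketch of that check sequence; the missing-directory fallback is an assumption about how the check could degrade gracefully, not minikube's actual behavior:

// pausedcheck.go - sketch of the failing sequence. Run on the node itself.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: container IDs in the kube-system namespace (this part succeeded).
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatalf("crictl: %v", err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))

	// Step 2: the call that failed in the log. runc reads its state root
	// (/run/runc by default); if the directory is absent it errors with
	// "open /run/runc: no such file or directory".
	if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
		// Assumption: treating a missing state root as "nothing is paused"
		// would avoid the exit-status-11 seen above; minikube instead fails hard.
		fmt.Println("/run/runc missing: assuming no paused containers")
		return
	}
	list, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		log.Fatalf("runc list: %v", err)
	}
	fmt.Println(string(list))
}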

TestAddons/parallel/CloudSpanner (5.28s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-vnsm7" [4e683ee4-7d24-486c-bd86-becb398c2f69] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003539818s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (268.767377ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:14:04.541473  498978 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:14:04.542302  498978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:04.542315  498978 out.go:374] Setting ErrFile to fd 2...
	I1212 00:14:04.542320  498978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:04.542609  498978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:14:04.542990  498978 mustload.go:66] Loading cluster: addons-199484
	I1212 00:14:04.543500  498978 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:04.543522  498978 addons.go:622] checking whether the cluster is paused
	I1212 00:14:04.543644  498978 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:04.543657  498978 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:14:04.544201  498978 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:14:04.565543  498978 ssh_runner.go:195] Run: systemctl --version
	I1212 00:14:04.565614  498978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:14:04.583563  498978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:14:04.693817  498978 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:14:04.693913  498978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:14:04.727064  498978 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:14:04.727087  498978 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:14:04.727092  498978 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:14:04.727098  498978 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:14:04.727102  498978 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:14:04.727105  498978 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:14:04.727109  498978 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:14:04.727112  498978 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:14:04.727115  498978 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:14:04.727121  498978 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:14:04.727124  498978 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:14:04.727127  498978 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:14:04.727131  498978 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:14:04.727134  498978 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:14:04.727138  498978 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:14:04.727143  498978 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:14:04.727146  498978 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:14:04.727156  498978 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:14:04.727159  498978 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:14:04.727162  498978 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:14:04.727167  498978 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:14:04.727176  498978 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:14:04.727180  498978 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:14:04.727182  498978 cri.go:89] found id: ""
	I1212 00:14:04.727240  498978 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:14:04.743619  498978 out.go:203] 
	W1212 00:14:04.746781  498978 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:14:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:14:04.746816  498978 out.go:285] * 
	W1212 00:14:04.753359  498978 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:14:04.756332  498978 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.28s)

TestAddons/parallel/LocalPath (9.45s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-199484 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-199484 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-199484 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [651e5a62-fd66-4b8e-97c8-755fc67e6f37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [651e5a62-fd66-4b8e-97c8-755fc67e6f37] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [651e5a62-fd66-4b8e-97c8-755fc67e6f37] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004152198s
addons_test.go:969: (dbg) Run:  kubectl --context addons-199484 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 ssh "cat /opt/local-path-provisioner/pvc-94ef3571-46f0-4f3c-928f-9c7893519f68_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-199484 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-199484 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (280.597362ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:14:08.611422  499178 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:14:08.612291  499178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:08.612329  499178 out.go:374] Setting ErrFile to fd 2...
	I1212 00:14:08.612350  499178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:14:08.612642  499178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:14:08.612959  499178 mustload.go:66] Loading cluster: addons-199484
	I1212 00:14:08.613365  499178 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:08.613410  499178 addons.go:622] checking whether the cluster is paused
	I1212 00:14:08.613546  499178 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:14:08.613581  499178 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:14:08.614131  499178 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:14:08.631675  499178 ssh_runner.go:195] Run: systemctl --version
	I1212 00:14:08.631729  499178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:14:08.655602  499178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:14:08.765414  499178 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:14:08.765509  499178 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:14:08.799747  499178 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:14:08.799771  499178 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:14:08.799776  499178 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:14:08.799780  499178 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:14:08.799784  499178 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:14:08.799788  499178 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:14:08.799791  499178 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:14:08.799794  499178 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:14:08.799797  499178 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:14:08.799804  499178 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:14:08.799815  499178 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:14:08.799824  499178 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:14:08.799828  499178 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:14:08.799831  499178 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:14:08.799834  499178 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:14:08.799839  499178 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:14:08.799841  499178 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:14:08.799851  499178 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:14:08.799855  499178 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:14:08.799858  499178 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:14:08.799862  499178 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:14:08.799865  499178 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:14:08.799868  499178 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:14:08.799871  499178 cri.go:89] found id: ""
	I1212 00:14:08.799923  499178 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:14:08.817313  499178 out.go:203] 
	W1212 00:14:08.820867  499178 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:14:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:14:08.820896  499178 out.go:285] * 
	W1212 00:14:08.829355  499178 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:14:08.833616  499178 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.45s)
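The LocalPath flow itself passed; the repeated kubectl get pvc ... -o jsonpath={.status.phase} calls above are a poll loop waiting for the local-path provisioner to bind the claim. The equivalent wait in client-go, as a sketch (the 2s interval is arbitrary; the 5m timeout mirrors the test's own budget):

// pvcwait.go - sketch: poll a PVC until it is Bound, like the repeated
// "kubectl get pvc test-pvc -o jsonpath={.status.phase}" calls above.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pvc, err := client.CoreV1().PersistentVolumeClaims("default").Get(ctx, "test-pvc", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			fmt.Println("phase:", pvc.Status.Phase)
			return pvc.Status.Phase == corev1.ClaimBound, nil
		})
	if err != nil {
		log.Fatal(err)
	}
}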

TestAddons/parallel/NvidiaDevicePlugin (6.32s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-4jhc7" [cb7157aa-9a23-4982-ad13-4bef3501f23d] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004198031s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable nvidia-device-plugin --alsologtostderr -v=1
2025/12/12 00:13:59 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (310.137223ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:13:59.163822  498776 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:13:59.165113  498776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:59.165134  498776 out.go:374] Setting ErrFile to fd 2...
	I1212 00:13:59.165141  498776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:59.165576  498776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:13:59.165987  498776 mustload.go:66] Loading cluster: addons-199484
	I1212 00:13:59.166789  498776 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:13:59.166816  498776 addons.go:622] checking whether the cluster is paused
	I1212 00:13:59.166997  498776 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:13:59.167017  498776 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:13:59.167561  498776 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:13:59.192933  498776 ssh_runner.go:195] Run: systemctl --version
	I1212 00:13:59.193028  498776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:13:59.212894  498776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:13:59.322706  498776 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:13:59.322784  498776 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:13:59.355379  498776 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:13:59.355412  498776 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:13:59.355418  498776 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:13:59.355422  498776 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:13:59.355425  498776 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:13:59.355439  498776 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:13:59.355444  498776 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:13:59.355446  498776 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:13:59.355450  498776 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:13:59.355456  498776 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:13:59.355466  498776 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:13:59.355469  498776 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:13:59.355472  498776 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:13:59.355475  498776 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:13:59.355479  498776 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:13:59.355491  498776 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:13:59.355494  498776 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:13:59.355499  498776 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:13:59.355502  498776 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:13:59.355505  498776 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:13:59.355510  498776 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:13:59.355514  498776 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:13:59.355517  498776 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:13:59.355519  498776 cri.go:89] found id: ""
	I1212 00:13:59.355576  498776 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:13:59.372422  498776 out.go:203] 
	W1212 00:13:59.375425  498776 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:13:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:13:59.375498  498776 out.go:285] * 
	W1212 00:13:59.382984  498776 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:13:59.386000  498776 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.32s)

TestAddons/parallel/Yakd (6.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-54gdk" [09f5d4ce-f502-4f3c-9f5c-ee62d23edf84] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003862132s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-199484 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199484 addons disable yakd --alsologtostderr -v=1: exit status 11 (265.554774ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 00:13:52.859302  498617 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:13:52.860162  498617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:52.860179  498617 out.go:374] Setting ErrFile to fd 2...
	I1212 00:13:52.860185  498617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:13:52.860468  498617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:13:52.860755  498617 mustload.go:66] Loading cluster: addons-199484
	I1212 00:13:52.861136  498617 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:13:52.861154  498617 addons.go:622] checking whether the cluster is paused
	I1212 00:13:52.861262  498617 config.go:182] Loaded profile config "addons-199484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:13:52.861277  498617 host.go:66] Checking if "addons-199484" exists ...
	I1212 00:13:52.861831  498617 cli_runner.go:164] Run: docker container inspect addons-199484 --format={{.State.Status}}
	I1212 00:13:52.881359  498617 ssh_runner.go:195] Run: systemctl --version
	I1212 00:13:52.881433  498617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199484
	I1212 00:13:52.899665  498617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/addons-199484/id_rsa Username:docker}
	I1212 00:13:53.007122  498617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:13:53.007218  498617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:13:53.039277  498617 cri.go:89] found id: "0b887da02a72ca5e9db931fa0b78ed0c10baf6d179621e8b3b144a117a55809d"
	I1212 00:13:53.039300  498617 cri.go:89] found id: "0104cc6b5dd424933a9226ae25ad89956edefad068d133eb18cfe430e71b64ac"
	I1212 00:13:53.039306  498617 cri.go:89] found id: "c47d48a718439532bd951ca0cdfae6827283df3ae31939423aeb7181a556e753"
	I1212 00:13:53.039311  498617 cri.go:89] found id: "5bff244d59411c9a273c0a7b1039b628bff021a4fcc767ce6393c52e86beb8eb"
	I1212 00:13:53.039315  498617 cri.go:89] found id: "ef710450ec222b3bb9dce827f650fe3b3d671f8886111935c0833ebaf845350b"
	I1212 00:13:53.039319  498617 cri.go:89] found id: "a10cfd4bcf4a619b3dd7e7bec66661fc97cca6bb81d6d42138f98dd802da82b5"
	I1212 00:13:53.039322  498617 cri.go:89] found id: "7adec1475ef10259911007f4aed32a65a52312ba2c8d26c991fc9f115e2afc7e"
	I1212 00:13:53.039326  498617 cri.go:89] found id: "ebe4e7c9ca0fc77879706813f9313a141bda416c1327e75bfc10b883dde9afe7"
	I1212 00:13:53.039329  498617 cri.go:89] found id: "7859d680677c9320a6b97dc99f20c809caf1cf0a0e02a9680dd377acc63b6976"
	I1212 00:13:53.039336  498617 cri.go:89] found id: "e47e9aabb94d9f6577691b06bc3594ad26b704954d37c8f5750e1b8ae813479b"
	I1212 00:13:53.039340  498617 cri.go:89] found id: "afc99c45439ad0e7765f5c4c99793d9a7abbbc0fbe4ee7679500e1d0d406f9cc"
	I1212 00:13:53.039344  498617 cri.go:89] found id: "e021d26e2771d4b836e3c2ff9a3c8340d8a7f191d6258abe1cbcbd9602298f76"
	I1212 00:13:53.039347  498617 cri.go:89] found id: "9da22fcbd3de39035ac1a03e1a791e6c51948745f63a0cd5880aee271a0b93c4"
	I1212 00:13:53.039356  498617 cri.go:89] found id: "12d12a2561d73ca125841a672690c99e84b5fb54f11b8b04257c2e1ab7f1a247"
	I1212 00:13:53.039360  498617 cri.go:89] found id: "549fae89400cf4efbe803ae4aa702097163dfc5e9a131def0ae6bbecb4c0601e"
	I1212 00:13:53.039365  498617 cri.go:89] found id: "e4aaf8d36273d3acde491a8cd14406fcfbfeebc26d9e26894b4170e98f011d9a"
	I1212 00:13:53.039372  498617 cri.go:89] found id: "be3ca683626781d8cf4bacd424bf231f28a131d46b225751ad657dc8a00878f1"
	I1212 00:13:53.039377  498617 cri.go:89] found id: "e251865f884a70fae76b65618090dc9e6abcf3315601089443dc5fb1bd026fb1"
	I1212 00:13:53.039380  498617 cri.go:89] found id: "f4dd998c607c5f8351f4c10ea768def06e8e2defafffafca5fe3876d98d9b123"
	I1212 00:13:53.039384  498617 cri.go:89] found id: "10211afe59632799435b4008dd96430e1edb4a1cc399809c32273577dfd7cd61"
	I1212 00:13:53.039389  498617 cri.go:89] found id: "7e478b538e97db66e0de68ed3ade2ff6d3d2420a89b4bad65e8158d500e16aae"
	I1212 00:13:53.039396  498617 cri.go:89] found id: "810bdb88faff8bb6f2eca85e10545aa7edde43a7452f29a88bc8f3d2c032b8df"
	I1212 00:13:53.039401  498617 cri.go:89] found id: "8f971c589eb18130d181fe2c7aa31da3304b9d3a3c2f5c74aa810a8426636a2a"
	I1212 00:13:53.039404  498617 cri.go:89] found id: ""
	I1212 00:13:53.039459  498617 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 00:13:53.056235  498617 out.go:203] 
	W1212 00:13:53.059225  498617 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:13:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 00:13:53.059261  498617 out.go:285] * 
	W1212 00:13:53.065656  498617 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:13:53.068635  498617 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-199484 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (503.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-035643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1212 00:21:17.458868  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:23:33.595348  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:24:01.300639  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:17.605081  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:17.611823  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:17.623368  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:17.644770  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:17.686304  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:17.767889  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:17.929456  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:18.251237  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:18.893288  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:20.174757  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:22.736766  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:27.858141  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:38.099504  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:58.581741  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:26:39.544514  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:28:01.468953  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:28:33.592594  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-035643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m21.776312832s)
-- stdout --
	* [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:42613
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:42613 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-035643 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-035643 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000281571s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00032218s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00032218s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-035643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
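The run prints its own remediation hints: the NO_PROXY warning, kubeadm's systemctl/journalctl advice, and the cgroup-driver suggestion tied to issue 4172. A hedged follow-up sketch combining them against this profile; whether the systemd cgroup driver actually cures this host is an assumption, not a confirmed fix:

	# kubeadm reports the kubelet never became healthy; inspect it inside the node
	out/minikube-linux-arm64 -p functional-035643 ssh -- sudo systemctl status kubelet
	out/minikube-linux-arm64 -p functional-035643 ssh -- sudo journalctl -xeu kubelet
	# the proxy warning above: include the minikube IP in NO_PROXY before retrying
	export NO_PROXY=192.168.49.2
	# retry the start with the flag minikube itself suggests
	out/minikube-linux-arm64 start -p functional-035643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd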
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-035643
helpers_test.go:244: (dbg) docker inspect functional-035643:
-- stdout --
	[
	    {
	        "Id": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	        "Created": "2025-12-12T00:21:16.539894649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:21:16.600605162Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hostname",
	        "HostsPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hosts",
	        "LogPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a-json.log",
	        "Name": "/functional-035643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-035643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	                "LowerDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-035643",
	                "Source": "/var/lib/docker/volumes/functional-035643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035643",
	                "name.minikube.sigs.k8s.io": "functional-035643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ede6a17442d6bf83b8f4c9f93f252345cec3d0406f82de2d6bd2cfd4713e2163",
	            "SandboxKey": "/var/run/docker/netns/ede6a17442d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d5:12:89:ea:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad01995b183fdebead6c725e2b942ae8ce2d3964b3552789fe5b50ee7e7239a3",
	                    "EndpointID": "d429a1cd0f840d042af4ad7ea0bda6067a342be7fb552083411004a3604b0124",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035643",
	                        "02b8c8e636a5"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643: exit status 6 (305.975437ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1212 00:29:33.672085  524775 status.go:458] kubeconfig endpoint: get endpoint: "functional-035643" does not appear in /home/jenkins/minikube-integration/22101-487723/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
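The status warning names its own fix; a minimal sketch, assuming the kubeconfig path from this run:

	# repoint the kubectl context at the current cluster, as the warning suggests
	out/minikube-linux-arm64 -p functional-035643 update-context
	# confirm the profile endpoint now appears in the kubeconfig the harness uses
	KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig kubectl config get-contexts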
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-921447 image save kicbase/echo-server:functional-921447 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image rm kicbase/echo-server:functional-921447 --alsologtostderr                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls                                                                                                                                │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/test/nested/copy/490954/hosts                                                                                         │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/ssl/certs/490954.pem                                                                                                  │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls                                                                                                                                │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /usr/share/ca-certificates/490954.pem                                                                                      │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image save --daemon kicbase/echo-server:functional-921447 --alsologtostderr                                                             │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/ssl/certs/4909542.pem                                                                                                 │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /usr/share/ca-certificates/4909542.pem                                                                                     │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ update-context │ functional-921447 update-context --alsologtostderr -v=2                                                                                                   │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ update-context │ functional-921447 update-context --alsologtostderr -v=2                                                                                                   │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ update-context │ functional-921447 update-context --alsologtostderr -v=2                                                                                                   │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls --format yaml --alsologtostderr                                                                                                │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls --format short --alsologtostderr                                                                                               │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls --format json --alsologtostderr                                                                                                │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh pgrep buildkitd                                                                                                                     │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ image          │ functional-921447 image ls --format table --alsologtostderr                                                                                               │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image build -t localhost/my-image:functional-921447 testdata/build --alsologtostderr                                                    │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls                                                                                                                                │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ delete         │ -p functional-921447                                                                                                                                      │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ start          │ -p functional-035643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:21:11
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:21:11.627086  519254 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:21:11.627193  519254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:11.627197  519254 out.go:374] Setting ErrFile to fd 2...
	I1212 00:21:11.627201  519254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:11.627438  519254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:21:11.627843  519254 out.go:368] Setting JSON to false
	I1212 00:21:11.628671  519254 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11017,"bootTime":1765487855,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:21:11.628728  519254 start.go:143] virtualization:  
	I1212 00:21:11.633458  519254 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:21:11.637340  519254 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:21:11.637383  519254 notify.go:221] Checking for updates...
	I1212 00:21:11.641654  519254 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:21:11.645123  519254 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:21:11.648485  519254 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:21:11.651824  519254 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:21:11.655049  519254 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:21:11.658397  519254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:21:11.679736  519254 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:21:11.679847  519254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:11.753539  519254 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-12 00:21:11.739469705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:21:11.753629  519254 docker.go:319] overlay module found
	I1212 00:21:11.757052  519254 out.go:179] * Using the docker driver based on user configuration
	I1212 00:21:11.760289  519254 start.go:309] selected driver: docker
	I1212 00:21:11.760300  519254 start.go:927] validating driver "docker" against <nil>
	I1212 00:21:11.760320  519254 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:21:11.761035  519254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:11.826946  519254 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-12 00:21:11.817765189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:21:11.827102  519254 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 00:21:11.827338  519254 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:21:11.830450  519254 out.go:179] * Using Docker driver with root privileges
	I1212 00:21:11.833451  519254 cni.go:84] Creating CNI manager for ""
	I1212 00:21:11.833510  519254 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:21:11.833517  519254 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:21:11.833597  519254 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:11.836878  519254 out.go:179] * Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	I1212 00:21:11.839802  519254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:21:11.842797  519254 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:21:11.845781  519254 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:21:11.845832  519254 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:21:11.845840  519254 cache.go:65] Caching tarball of preloaded images
	I1212 00:21:11.845874  519254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:21:11.845928  519254 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:21:11.845938  519254 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:21:11.846288  519254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/config.json ...
	I1212 00:21:11.846306  519254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/config.json: {Name:mk0175725921315771ee4ce783e89388936d249b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:21:11.865650  519254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:21:11.865661  519254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:21:11.865673  519254 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:21:11.865702  519254 start.go:360] acquireMachinesLock for functional-035643: {Name:mkb0cdc7d354412594dc63c0234fde00134e758d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:21:11.865806  519254 start.go:364] duration metric: took 90.188µs to acquireMachinesLock for "functional-035643"
	I1212 00:21:11.865831  519254 start.go:93] Provisioning new machine with config: &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:21:11.865905  519254 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:21:11.869326  519254 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1212 00:21:11.869595  519254 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:42613 to docker env.
	I1212 00:21:11.869619  519254 start.go:159] libmachine.API.Create for "functional-035643" (driver="docker")
	I1212 00:21:11.869640  519254 client.go:173] LocalClient.Create starting
	I1212 00:21:11.869700  519254 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem
	I1212 00:21:11.869735  519254 main.go:143] libmachine: Decoding PEM data...
	I1212 00:21:11.869749  519254 main.go:143] libmachine: Parsing certificate...
	I1212 00:21:11.869798  519254 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem
	I1212 00:21:11.869820  519254 main.go:143] libmachine: Decoding PEM data...
	I1212 00:21:11.869830  519254 main.go:143] libmachine: Parsing certificate...
	I1212 00:21:11.870169  519254 cli_runner.go:164] Run: docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:21:11.886360  519254 cli_runner.go:211] docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:21:11.886438  519254 network_create.go:284] running [docker network inspect functional-035643] to gather additional debugging logs...
	I1212 00:21:11.886452  519254 cli_runner.go:164] Run: docker network inspect functional-035643
	W1212 00:21:11.905660  519254 cli_runner.go:211] docker network inspect functional-035643 returned with exit code 1
	I1212 00:21:11.905680  519254 network_create.go:287] error running [docker network inspect functional-035643]: docker network inspect functional-035643: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-035643 not found
	I1212 00:21:11.905692  519254 network_create.go:289] output of [docker network inspect functional-035643]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-035643 not found
	
	** /stderr **
	I1212 00:21:11.905809  519254 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:21:11.924479  519254 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400167cf30}
	I1212 00:21:11.924514  519254 network_create.go:124] attempt to create docker network functional-035643 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 00:21:11.924579  519254 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-035643 functional-035643
	I1212 00:21:11.982586  519254 network_create.go:108] docker network functional-035643 192.168.49.0/24 created
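
The two failed `docker network inspect` calls above are expected on a first start: the network does not exist yet, so minikube picks the first free private /24 (192.168.49.0/24) and creates it. The create command is logged verbatim; invoking the same thing from Go is just a matter of shelling out (a sketch, not minikube's cli_runner):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Flags copied from the `docker network create` line in the log.
    	cmd := exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet=192.168.49.0/24",
    		"--gateway=192.168.49.1",
    		"-o", "--ip-masq",
    		"-o", "--icc",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"--label=name.minikube.sigs.k8s.io=functional-035643",
    		"functional-035643")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Println("network create failed:", err)
    	}
    }
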
	I1212 00:21:11.982608  519254 kic.go:121] calculated static IP "192.168.49.2" for the "functional-035643" container
	I1212 00:21:11.982763  519254 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:21:11.999794  519254 cli_runner.go:164] Run: docker volume create functional-035643 --label name.minikube.sigs.k8s.io=functional-035643 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:21:12.020752  519254 oci.go:103] Successfully created a docker volume functional-035643
	I1212 00:21:12.020915  519254 cli_runner.go:164] Run: docker run --rm --name functional-035643-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-035643 --entrypoint /usr/bin/test -v functional-035643:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:21:12.520498  519254 oci.go:107] Successfully prepared a docker volume functional-035643
	I1212 00:21:12.520560  519254 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:21:12.520568  519254 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:21:12.520637  519254 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-035643:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:21:16.464275  519254 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-035643:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.943605044s)
	I1212 00:21:16.464297  519254 kic.go:203] duration metric: took 3.943726256s to extract preloaded images to volume ...
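
The ~3.9s step above unpacks the preload without needing lz4 or tar on the host: a throwaway container mounts the tarball read-only at /preloaded.tar and the machine's named volume at /extractDir, with tar as the entrypoint. Reproducing that extraction by hand would look roughly like this (same image and flags as the logged command; the @sha256 digest is omitted here for brevity):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4")
    	// Throwaway container: tar runs as the entrypoint, writing into the
    	// named volume that will later back /var in the minikube container.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "functional-035643:/extractDir",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083",
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }
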
	W1212 00:21:16.464436  519254 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 00:21:16.464536  519254 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:21:16.525302  519254 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-035643 --name functional-035643 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-035643 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-035643 --network functional-035643 --ip 192.168.49.2 --volume functional-035643:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 00:21:16.828585  519254 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Running}}
	I1212 00:21:16.856305  519254 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:21:16.879272  519254 cli_runner.go:164] Run: docker exec functional-035643 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:21:16.927747  519254 oci.go:144] the created container "functional-035643" has a running status.
	I1212 00:21:16.927765  519254 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa...
	I1212 00:21:17.150975  519254 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:21:17.170899  519254 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:21:17.189548  519254 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:21:17.189560  519254 kic_runner.go:114] Args: [docker exec --privileged functional-035643 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 00:21:17.252969  519254 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:21:17.278326  519254 machine.go:94] provisionDockerMachine start ...
	I1212 00:21:17.278410  519254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:21:17.303750  519254 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:17.304068  519254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:21:17.304074  519254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:21:17.304619  519254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49292->127.0.0.1:33183: read: connection reset by peer
	I1212 00:21:20.454445  519254 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
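
Note the recovery between 00:21:17 and 00:21:20: the first SSH dial is reset while sshd is still starting inside the container, and the `hostname` probe succeeds on a later attempt. A rough equivalent of that dial-with-retry using golang.org/x/crypto/ssh (address and key path taken from the log; the retry policy is an assumption):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/functional-035643/id_rsa"))
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
    		Timeout:         5 * time.Second,
    	}
    	// First dials can be reset while sshd is still coming up, so retry briefly.
    	var client *ssh.Client
    	for i := 0; i < 10; i++ {
    		if client, err = ssh.Dial("tcp", "127.0.0.1:33183", cfg); err == nil {
    			break
    		}
    		time.Sleep(time.Second)
    	}
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, _ := sess.Output("hostname")
    	fmt.Printf("%s", out)
    }
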
	
	I1212 00:21:20.454460  519254 ubuntu.go:182] provisioning hostname "functional-035643"
	I1212 00:21:20.454531  519254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:21:20.471811  519254 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:20.472122  519254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:21:20.472130  519254 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-035643 && echo "functional-035643" | sudo tee /etc/hostname
	I1212 00:21:20.631890  519254 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:21:20.631957  519254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:21:20.649368  519254 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:20.649670  519254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:21:20.649682  519254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-035643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-035643/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-035643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:21:20.798945  519254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:21:20.798962  519254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 00:21:20.798982  519254 ubuntu.go:190] setting up certificates
	I1212 00:21:20.798990  519254 provision.go:84] configureAuth start
	I1212 00:21:20.799049  519254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:21:20.816735  519254 provision.go:143] copyHostCerts
	I1212 00:21:20.816793  519254 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 00:21:20.816800  519254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:21:20.816875  519254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 00:21:20.816967  519254 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 00:21:20.816971  519254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:21:20.816996  519254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 00:21:20.817052  519254 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 00:21:20.817060  519254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:21:20.817083  519254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 00:21:20.817126  519254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.functional-035643 san=[127.0.0.1 192.168.49.2 functional-035643 localhost minikube]
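
The server cert generated above is an ordinary CA-signed certificate whose SANs are exactly the names in the logged san=[...] list, so TLS works whether the machine is addressed as 127.0.0.1, 192.168.49.2, or a hostname. A compact sketch of issuing such a cert with crypto/x509 (a stand-in CA is generated here; minikube reuses its existing ca.pem/ca-key.pem, and error handling is elided):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-in CA; minikube loads ca.pem/ca-key.pem from its cert dir instead.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SANs reported in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srv := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-035643"}},
    		DNSNames:     []string{"functional-035643", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
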
	I1212 00:21:20.909324  519254 provision.go:177] copyRemoteCerts
	I1212 00:21:20.909374  519254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:21:20.909420  519254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:21:20.926013  519254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:21:21.030331  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:21:21.047434  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:21:21.063846  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:21:21.081681  519254 provision.go:87] duration metric: took 282.670455ms to configureAuth
	I1212 00:21:21.081698  519254 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:21:21.081885  519254 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:21:21.081982  519254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:21:21.100309  519254 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:21.100620  519254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:21:21.100641  519254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:21:21.406076  519254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:21:21.406087  519254 machine.go:97] duration metric: took 4.127744006s to provisionDockerMachine
	I1212 00:21:21.406097  519254 client.go:176] duration metric: took 9.536452s to LocalClient.Create
	I1212 00:21:21.406116  519254 start.go:167] duration metric: took 9.536497152s to libmachine.API.Create "functional-035643"
	I1212 00:21:21.406122  519254 start.go:293] postStartSetup for "functional-035643" (driver="docker")
	I1212 00:21:21.406132  519254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:21:21.406206  519254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:21:21.406252  519254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:21:21.423671  519254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:21:21.527827  519254 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:21:21.531324  519254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:21:21.531341  519254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:21:21.531351  519254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 00:21:21.531406  519254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 00:21:21.531499  519254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 00:21:21.531578  519254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> hosts in /etc/test/nested/copy/490954
	I1212 00:21:21.531620  519254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/490954
	I1212 00:21:21.539563  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:21:21.556730  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts --> /etc/test/nested/copy/490954/hosts (40 bytes)
	I1212 00:21:21.574516  519254 start.go:296] duration metric: took 168.381845ms for postStartSetup
	I1212 00:21:21.574901  519254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:21:21.591452  519254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/config.json ...
	I1212 00:21:21.591729  519254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:21:21.591768  519254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:21:21.608497  519254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:21:21.712003  519254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:21:21.716791  519254 start.go:128] duration metric: took 9.850871969s to createHost
	I1212 00:21:21.716805  519254 start.go:83] releasing machines lock for "functional-035643", held for 9.850992369s
	I1212 00:21:21.716883  519254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:21:21.736461  519254 out.go:179] * Found network options:
	I1212 00:21:21.739430  519254 out.go:179]   - HTTP_PROXY=localhost:42613
	W1212 00:21:21.742274  519254 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1212 00:21:21.745106  519254 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1212 00:21:21.747956  519254 ssh_runner.go:195] Run: cat /version.json
	I1212 00:21:21.747996  519254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:21:21.748032  519254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:21:21.748084  519254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:21:21.766470  519254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:21:21.772212  519254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:21:21.870311  519254 ssh_runner.go:195] Run: systemctl --version
	I1212 00:21:21.974627  519254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:21:22.012665  519254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:21:22.017136  519254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:21:22.017209  519254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:21:22.045359  519254 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 00:21:22.045372  519254 start.go:496] detecting cgroup driver to use...
	I1212 00:21:22.045403  519254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:21:22.045451  519254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:21:22.063763  519254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:21:22.077123  519254 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:21:22.077180  519254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:21:22.094873  519254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:21:22.113649  519254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:21:22.232606  519254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:21:22.354936  519254 docker.go:234] disabling docker service ...
	I1212 00:21:22.355001  519254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:21:22.375868  519254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:21:22.389931  519254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:21:22.509968  519254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:21:22.626169  519254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:21:22.639085  519254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:21:22.653171  519254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:21:22.653243  519254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:21:22.662362  519254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:21:22.662433  519254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:21:22.671788  519254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:21:22.685825  519254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:21:22.695201  519254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:21:22.703524  519254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:21:22.712048  519254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:21:22.725679  519254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:21:22.734813  519254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:21:22.742174  519254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:21:22.749237  519254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:21:22.856672  519254 ssh_runner.go:195] Run: sudo systemctl restart crio
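
Every sed one-liner in the block above follows the same pattern: anchor on a config key in /etc/crio/crio.conf.d/02-crio.conf and rewrite the whole line (pause image, cgroup manager, unprivileged-port sysctl), then daemon-reload and restart crio. The same key-rewrite expressed in Go over file contents (a sketch of the pattern, not minikube's code, which runs sed over SSH as logged):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setKey rewrites the whole line holding a TOML key, like the logged
    // sed 's|^.*pause_image = .*$|pause_image = "..."|' invocations.
    func setKey(conf, key, val string) string {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAllString(conf, key+` = "`+val+`"`)
    }

    func main() {
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
    	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
    	conf = setKey(conf, "cgroup_manager", "cgroupfs")
    	fmt.Print(conf)
    }
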
	I1212 00:21:23.045668  519254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:21:23.045731  519254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:21:23.049601  519254 start.go:564] Will wait 60s for crictl version
	I1212 00:21:23.049668  519254 ssh_runner.go:195] Run: which crictl
	I1212 00:21:23.053482  519254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:21:23.080004  519254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:21:23.080080  519254 ssh_runner.go:195] Run: crio --version
	I1212 00:21:23.111644  519254 ssh_runner.go:195] Run: crio --version
	I1212 00:21:23.145318  519254 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:21:23.148294  519254 cli_runner.go:164] Run: docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:21:23.164506  519254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:21:23.168319  519254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
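
The bash pipeline above is a small atomic-ish /etc/hosts edit: filter out any stale host.minikube.internal line, append the fresh mapping to the gateway IP, and copy the temp file back with sudo. The same transformation in Go (printing instead of writing back, since rewriting /etc/hosts needs root):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale mapping, mirroring the grep -v in the log.
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, "192.168.49.1\thost.minikube.internal")
    	// Writing back requires root; minikube does it via `sudo cp` over SSH.
    	fmt.Println(strings.Join(kept, "\n"))
    }
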
	I1212 00:21:23.177928  519254 kubeadm.go:884] updating cluster {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:21:23.178043  519254 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:21:23.178102  519254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:21:23.214312  519254 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:21:23.214325  519254 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:21:23.214387  519254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:21:23.241598  519254 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:21:23.241609  519254 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:21:23.241615  519254 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 00:21:23.241713  519254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-035643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:21:23.241801  519254 ssh_runner.go:195] Run: crio config
	I1212 00:21:23.304890  519254 cni.go:84] Creating CNI manager for ""
	I1212 00:21:23.304903  519254 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:21:23.304918  519254 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:21:23.304947  519254 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-035643 NodeName:functional-035643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:21:23.305081  519254 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-035643"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
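
The rendered kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is a straightforward projection of the options struct logged at kubeadm.go:190. A trimmed sketch of rendering just the InitConfiguration portion with text/template (field names here are illustrative, not minikube's):

    package main

    import (
    	"os"
    	"text/template"
    )

    const initCfg = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
    	"kind: InitConfiguration\n" +
    	"localAPIEndpoint:\n" +
    	"  advertiseAddress: {{.AdvertiseAddress}}\n" +
    	"  bindPort: {{.APIServerPort}}\n" +
    	"nodeRegistration:\n" +
    	"  criSocket: unix://{{.CRISocket}}\n" +
    	"  name: \"{{.NodeName}}\"\n"

    func main() {
    	opts := struct {
    		AdvertiseAddress string
    		APIServerPort    int
    		CRISocket        string
    		NodeName         string
    	}{"192.168.49.2", 8441, "/var/run/crio/crio.sock", "functional-035643"}
    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	if err := t.Execute(os.Stdout, opts); err != nil {
    		panic(err)
    	}
    }
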
	
	I1212 00:21:23.305171  519254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:21:23.312930  519254 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:21:23.312993  519254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:21:23.320675  519254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:21:23.333818  519254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:21:23.347461  519254 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1212 00:21:23.360430  519254 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:21:23.363942  519254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:21:23.373772  519254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:21:23.498037  519254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:21:23.515424  519254 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643 for IP: 192.168.49.2
	I1212 00:21:23.515435  519254 certs.go:195] generating shared ca certs ...
	I1212 00:21:23.515450  519254 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:21:23.515589  519254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 00:21:23.515628  519254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 00:21:23.515634  519254 certs.go:257] generating profile certs ...
	I1212 00:21:23.515687  519254 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key
	I1212 00:21:23.515697  519254 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt with IP's: []
	I1212 00:21:23.791847  519254 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt ...
	I1212 00:21:23.791862  519254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: {Name:mk64e8dea668554777c58e02b8db3c2c82c3aa5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:21:23.792075  519254 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key ...
	I1212 00:21:23.792081  519254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key: {Name:mkabfca2eecde04df80ab5df76e3b39e3559cd9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:21:23.792188  519254 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493
	I1212 00:21:23.792200  519254 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt.8a9a2493 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1212 00:21:23.953155  519254 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt.8a9a2493 ...
	I1212 00:21:23.953169  519254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt.8a9a2493: {Name:mk2fe9199266faeb91197aa94275e86f7c1c1999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:21:23.953361  519254 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493 ...
	I1212 00:21:23.953369  519254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493: {Name:mkdc608250cb4874dcfb5c01dd68f4479d717047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:21:23.953473  519254 certs.go:382] copying /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt.8a9a2493 -> /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt
	I1212 00:21:23.953548  519254 certs.go:386] copying /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493 -> /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key
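
The SAN list for the apiserver cert above ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]) is worth decoding: 10.96.0.1 is the first usable address of the ServiceCIDR 10.96.0.0/12, i.e. the ClusterIP that the kubernetes.default Service will receive (10.0.0.1 presumably covers older service-range defaults). Deriving that first address:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	svc := netip.MustParsePrefix("10.96.0.0/12")
    	fmt.Println(svc.Addr().Next()) // 10.96.0.1: network address + 1
    }
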
	I1212 00:21:23.953600  519254 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key
	I1212 00:21:23.953614  519254 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt with IP's: []
	I1212 00:21:24.208545  519254 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt ...
	I1212 00:21:24.208562  519254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt: {Name:mke05aa8bbc84220899290753b3bcb148d9886eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:21:24.208746  519254 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key ...
	I1212 00:21:24.208754  519254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key: {Name:mk20a218ee4d83162846cc78d338172e927c762e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:21:24.208935  519254 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 00:21:24.208974  519254 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 00:21:24.208982  519254 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:21:24.209014  519254 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:21:24.209038  519254 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:21:24.209060  519254 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 00:21:24.209109  519254 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:21:24.209674  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:21:24.236248  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:21:24.264325  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:21:24.288684  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:21:24.305969  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:21:24.324478  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:21:24.341727  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:21:24.359645  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:21:24.377308  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 00:21:24.395547  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:21:24.412835  519254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 00:21:24.431371  519254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:21:24.444475  519254 ssh_runner.go:195] Run: openssl version
	I1212 00:21:24.451216  519254 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:24.459720  519254 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:21:24.467186  519254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:24.470940  519254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:24.470996  519254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:24.511725  519254 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:21:24.519137  519254 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 00:21:24.526339  519254 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 00:21:24.533445  519254 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 00:21:24.540525  519254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 00:21:24.544251  519254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:21:24.544307  519254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 00:21:24.586226  519254 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:21:24.593270  519254 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/490954.pem /etc/ssl/certs/51391683.0
	I1212 00:21:24.600276  519254 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 00:21:24.607187  519254 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 00:21:24.614258  519254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 00:21:24.618029  519254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:21:24.618086  519254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 00:21:24.658677  519254 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:21:24.665918  519254 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4909542.pem /etc/ssl/certs/3ec20f2e.0
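
The openssl/ln dance above is how OpenSSL-based clients find trust anchors: certificates in /etc/ssl/certs are looked up by subject-hash filenames of the form <hash>.0, so each CA PEM gets a symlink named after its hash (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). The same two steps from Go (needs root; paths taken from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pemPath := "/etc/ssl/certs/minikubeCA.pem"
    	// `openssl x509 -hash` prints the subject hash OpenSSL uses for lookup.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // emulate ln -fs
    	if err := os.Symlink(pemPath, link); err != nil {
    		panic(err)
    	}
    	fmt.Println(link, "->", pemPath)
    }
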
	I1212 00:21:24.673191  519254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:21:24.676581  519254 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:21:24.676623  519254 kubeadm.go:401] StartCluster: {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:24.676690  519254 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:21:24.676758  519254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:21:24.708311  519254 cri.go:89] found id: ""
	I1212 00:21:24.708372  519254 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:21:24.716243  519254 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:21:24.723658  519254 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:21:24.723726  519254 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:21:24.731486  519254 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:21:24.731496  519254 kubeadm.go:158] found existing configuration files:
	
	I1212 00:21:24.731557  519254 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:21:24.739591  519254 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:21:24.739661  519254 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:21:24.747036  519254 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:21:24.754639  519254 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:21:24.754718  519254 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:21:24.762033  519254 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:21:24.769560  519254 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:21:24.769623  519254 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:21:24.777097  519254 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:21:24.784959  519254 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:21:24.785023  519254 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:21:24.792657  519254 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:21:24.896959  519254 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:21:24.897378  519254 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:21:24.978654  519254 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:25:30.277583  519254 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 00:25:30.277750  519254 kubeadm.go:319] 
	I1212 00:25:30.277874  519254 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 00:25:30.282837  519254 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:25:30.282887  519254 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:25:30.282974  519254 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:25:30.283036  519254 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:25:30.283078  519254 kubeadm.go:319] OS: Linux
	I1212 00:25:30.283136  519254 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:25:30.283190  519254 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:25:30.283237  519254 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:25:30.283288  519254 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:25:30.283340  519254 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:25:30.283391  519254 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:25:30.283442  519254 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:25:30.283492  519254 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:25:30.283543  519254 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:25:30.283621  519254 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:25:30.283728  519254 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:25:30.283824  519254 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:25:30.283892  519254 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:25:30.286768  519254 out.go:252]   - Generating certificates and keys ...
	I1212 00:25:30.286855  519254 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:25:30.286925  519254 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:25:30.286992  519254 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:25:30.287048  519254 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:25:30.287107  519254 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:25:30.287163  519254 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:25:30.287222  519254 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:25:30.287342  519254 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-035643 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:25:30.287394  519254 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:25:30.287539  519254 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-035643 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:25:30.287619  519254 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:25:30.287682  519254 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:25:30.287725  519254 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:25:30.287780  519254 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:25:30.287830  519254 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:25:30.287886  519254 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:25:30.287941  519254 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:25:30.288017  519254 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:25:30.288070  519254 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:25:30.288150  519254 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:25:30.288219  519254 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:25:30.293011  519254 out.go:252]   - Booting up control plane ...
	I1212 00:25:30.293121  519254 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:25:30.293199  519254 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:25:30.293285  519254 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:25:30.293406  519254 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:25:30.293507  519254 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:25:30.293636  519254 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:25:30.293725  519254 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:25:30.293763  519254 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:25:30.293903  519254 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:25:30.294013  519254 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:25:30.294077  519254 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000281571s
	I1212 00:25:30.294081  519254 kubeadm.go:319] 
	I1212 00:25:30.294141  519254 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:25:30.294173  519254 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:25:30.294282  519254 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:25:30.294286  519254 kubeadm.go:319] 
	I1212 00:25:30.294400  519254 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:25:30.294437  519254 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:25:30.294467  519254 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:25:30.294509  519254 kubeadm.go:319] 
	W1212 00:25:30.294596  519254 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-035643 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-035643 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000281571s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
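The retry below fails the same way: the kubelet-check timeout is a symptom, and the [WARNING SystemVerification] above names the cause. A v1.35 kubelet refuses to run on a cgroup v1 host unless its configuration sets FailCgroupV1 to false (the kubelet journal at the end of this report shows the matching validation error). A sketch of that opt-in only, assuming the config path from this run and the KubeletConfiguration field name failCgroupV1; per the warning, the validation must also be skipped explicitly:

    # Sketch only: opt a v1.35 kubelet back into cgroup v1 on this host.
    # Appends a top-level field to the KubeletConfiguration kubeadm wrote.
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet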
	
	I1212 00:25:30.294869  519254 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 00:25:30.702896  519254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:25:30.715586  519254 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:25:30.715640  519254 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:25:30.723346  519254 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:25:30.723357  519254 kubeadm.go:158] found existing configuration files:
	
	I1212 00:25:30.723408  519254 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:25:30.730788  519254 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:25:30.730842  519254 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:25:30.738103  519254 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:25:30.745797  519254 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:25:30.745862  519254 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:25:30.753070  519254 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:25:30.760741  519254 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:25:30.760796  519254 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:25:30.768021  519254 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:25:30.775936  519254 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:25:30.775991  519254 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:25:30.783262  519254 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:25:30.824982  519254 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:25:30.825272  519254 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:25:30.898751  519254 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:25:30.898814  519254 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:25:30.898848  519254 kubeadm.go:319] OS: Linux
	I1212 00:25:30.898891  519254 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:25:30.898938  519254 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:25:30.898984  519254 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:25:30.899031  519254 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:25:30.899078  519254 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:25:30.899137  519254 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:25:30.899180  519254 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:25:30.899227  519254 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:25:30.899274  519254 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:25:30.967794  519254 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:25:30.967897  519254 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:25:30.967986  519254 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:25:30.979146  519254 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:25:30.984228  519254 out.go:252]   - Generating certificates and keys ...
	I1212 00:25:30.984319  519254 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:25:30.984396  519254 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:25:30.984495  519254 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:25:30.984562  519254 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:25:30.984643  519254 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:25:30.984708  519254 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:25:30.984782  519254 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:25:30.984864  519254 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:25:30.984946  519254 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:25:30.985030  519254 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:25:30.985071  519254 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:25:30.985130  519254 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:25:31.565961  519254 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:25:31.707841  519254 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:25:32.113751  519254 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:25:32.334874  519254 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:25:32.746739  519254 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:25:32.747460  519254 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:25:32.750345  519254 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:25:32.753473  519254 out.go:252]   - Booting up control plane ...
	I1212 00:25:32.753575  519254 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:25:32.753968  519254 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:25:32.755335  519254 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:25:32.769860  519254 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:25:32.769962  519254 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:25:32.777337  519254 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:25:32.777749  519254 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:25:32.778015  519254 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:25:32.904887  519254 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:25:32.904999  519254 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:29:32.904942  519254 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00032218s
	I1212 00:29:32.904962  519254 kubeadm.go:319] 
	I1212 00:29:32.905025  519254 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:29:32.905067  519254 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:29:32.905171  519254 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:29:32.905175  519254 kubeadm.go:319] 
	I1212 00:29:32.905680  519254 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:29:32.905750  519254 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:29:32.905804  519254 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:29:32.905808  519254 kubeadm.go:319] 
	I1212 00:29:32.911196  519254 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:29:32.911658  519254 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:29:32.911759  519254 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:29:32.911978  519254 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 00:29:32.911981  519254 kubeadm.go:319] 
	I1212 00:29:32.912048  519254 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 00:29:32.912108  519254 kubeadm.go:403] duration metric: took 8m8.235486568s to StartCluster
	I1212 00:29:32.912158  519254 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:29:32.912219  519254 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:29:32.938617  519254 cri.go:89] found id: ""
	I1212 00:29:32.938643  519254 logs.go:282] 0 containers: []
	W1212 00:29:32.938650  519254 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:29:32.938657  519254 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:29:32.938730  519254 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:29:32.963926  519254 cri.go:89] found id: ""
	I1212 00:29:32.963939  519254 logs.go:282] 0 containers: []
	W1212 00:29:32.963947  519254 logs.go:284] No container was found matching "etcd"
	I1212 00:29:32.963952  519254 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:29:32.964010  519254 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:29:32.992282  519254 cri.go:89] found id: ""
	I1212 00:29:32.992296  519254 logs.go:282] 0 containers: []
	W1212 00:29:32.992304  519254 logs.go:284] No container was found matching "coredns"
	I1212 00:29:32.992309  519254 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:29:32.992368  519254 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:29:33.019739  519254 cri.go:89] found id: ""
	I1212 00:29:33.019753  519254 logs.go:282] 0 containers: []
	W1212 00:29:33.019760  519254 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:29:33.019766  519254 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:29:33.019825  519254 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:29:33.045289  519254 cri.go:89] found id: ""
	I1212 00:29:33.045303  519254 logs.go:282] 0 containers: []
	W1212 00:29:33.045310  519254 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:29:33.045319  519254 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:29:33.045384  519254 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:29:33.072054  519254 cri.go:89] found id: ""
	I1212 00:29:33.072069  519254 logs.go:282] 0 containers: []
	W1212 00:29:33.072076  519254 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:29:33.072081  519254 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:29:33.072142  519254 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:29:33.097470  519254 cri.go:89] found id: ""
	I1212 00:29:33.097484  519254 logs.go:282] 0 containers: []
	W1212 00:29:33.097491  519254 logs.go:284] No container was found matching "kindnet"
	I1212 00:29:33.097499  519254 logs.go:123] Gathering logs for kubelet ...
	I1212 00:29:33.097509  519254 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:29:33.161822  519254 logs.go:123] Gathering logs for dmesg ...
	I1212 00:29:33.161842  519254 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:29:33.177758  519254 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:29:33.177774  519254 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:29:33.268928  519254 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:29:33.258118    4815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:29:33.258531    4815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:29:33.262541    4815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:29:33.263235    4815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:29:33.264853    4815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:29:33.258118    4815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:29:33.258531    4815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:29:33.262541    4815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:29:33.263235    4815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:29:33.264853    4815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
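Every kubectl probe here fails identically because no kube-apiserver container was ever created (the crictl listings above found none), so nothing listens on 8441. A hypothetical spot-check from the host, profile name taken from this run; both commands should reproduce the empty listing and the connection refusal seen above:

    # No apiserver container -> connection refused on the apiserver port.
    minikube -p functional-035643 ssh -- sudo crictl ps -a --name=kube-apiserver
    minikube -p functional-035643 ssh -- curl -ksS https://localhost:8441/healthz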
	I1212 00:29:33.268944  519254 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:29:33.268954  519254 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:29:33.303601  519254 logs.go:123] Gathering logs for container status ...
	I1212 00:29:33.303620  519254 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 00:29:33.330895  519254 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00032218s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 00:29:33.330939  519254 out.go:285] * 
	W1212 00:29:33.331046  519254 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00032218s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:29:33.331109  519254 out.go:285] * 
	W1212 00:29:33.333338  519254 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:29:33.338640  519254 out.go:203] 
	W1212 00:29:33.341488  519254 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00032218s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:29:33.341534  519254 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 00:29:33.341556  519254 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
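The suggested flag adjusts the cgroup driver, but the kubelet journal below shows the kubelet never gets that far: it fails configuration validation because the host runs cgroup v1. On a host like this one (5.15 AWS kernel, legacy hierarchy), booting with the unified hierarchy is the more direct fix. The first command is the log's suggestion with this run's profile name added; the kernel-parameter alternative is an assumption, untested in this report:

    # Suggestion from the log, with the profile name from this run:
    minikube start -p functional-035643 --extra-config=kubelet.cgroup-driver=systemd
    # Host-level alternative (assumption): boot with cgroup v2 so the
    # v1.35 kubelet's cgroup v1 validation never triggers.
    #   GRUB_CMDLINE_LINUX="... systemd.unified_cgroup_hierarchy=1", then update-grub && reboot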
	I1212 00:29:33.344592  519254 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 00:21:23 functional-035643 crio[843]: time="2025-12-12T00:21:23.039770208Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 00:21:23 functional-035643 crio[843]: time="2025-12-12T00:21:23.039928137Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 00:21:23 functional-035643 crio[843]: time="2025-12-12T00:21:23.039985022Z" level=info msg="Create NRI interface"
	Dec 12 00:21:23 functional-035643 crio[843]: time="2025-12-12T00:21:23.040098512Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 00:21:23 functional-035643 crio[843]: time="2025-12-12T00:21:23.040115472Z" level=info msg="runtime interface created"
	Dec 12 00:21:23 functional-035643 crio[843]: time="2025-12-12T00:21:23.040130364Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 00:21:23 functional-035643 crio[843]: time="2025-12-12T00:21:23.040137822Z" level=info msg="runtime interface starting up..."
	Dec 12 00:21:23 functional-035643 crio[843]: time="2025-12-12T00:21:23.040144419Z" level=info msg="starting plugins..."
	Dec 12 00:21:23 functional-035643 crio[843]: time="2025-12-12T00:21:23.040157572Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:21:23 functional-035643 crio[843]: time="2025-12-12T00:21:23.040223022Z" level=info msg="No systemd watchdog enabled"
	Dec 12 00:21:23 functional-035643 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 00:21:24 functional-035643 crio[843]: time="2025-12-12T00:21:24.984400119Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=721d2936-f9e2-40c7-9907-6c8206eb2520 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:21:24 functional-035643 crio[843]: time="2025-12-12T00:21:24.985180884Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=1b1c8c3c-db4c-44d2-a66e-b44ad8cc357c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:21:24 functional-035643 crio[843]: time="2025-12-12T00:21:24.985684701Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=12f8ea5e-796b-4f0f-8b38-36da7f175741 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:21:24 functional-035643 crio[843]: time="2025-12-12T00:21:24.986244704Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=2f60dc7a-87e2-4104-9d81-d7622852d174 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:21:24 functional-035643 crio[843]: time="2025-12-12T00:21:24.986739741Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=7a3055a2-2741-43e2-bc47-14a87e38301a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:21:24 functional-035643 crio[843]: time="2025-12-12T00:21:24.987329964Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=8394f71f-8949-491f-8047-6d88ec70eb91 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:21:24 functional-035643 crio[843]: time="2025-12-12T00:21:24.987916404Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=ba7c4c48-e20c-4276-bd74-74ea76c95fb3 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:25:30 functional-035643 crio[843]: time="2025-12-12T00:25:30.971028479Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f5077063-643f-4bcf-b9ec-9835229b2fb0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:25:30 functional-035643 crio[843]: time="2025-12-12T00:25:30.971781102Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=d575e2cd-f053-435b-bb1c-b52163f03b6f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:25:30 functional-035643 crio[843]: time="2025-12-12T00:25:30.972282227Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=8c01b644-f74f-468d-b804-14962d7afb9e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:25:30 functional-035643 crio[843]: time="2025-12-12T00:25:30.97278743Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b9d35434-a7db-48a3-a659-444757c5c1f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:25:30 functional-035643 crio[843]: time="2025-12-12T00:25:30.973299221Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0c416e1f-fbcc-47d3-8649-76ea89de6057 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:25:30 functional-035643 crio[843]: time="2025-12-12T00:25:30.973797761Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f1b29322-5d28-4cc2-abd1-9fdcad95f171 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:25:30 functional-035643 crio[843]: time="2025-12-12T00:25:30.974277381Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=936610ab-15e7-4591-b8c3-465b75778b82 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:29:34.298393    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:29:34.298935    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:29:34.300681    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:29:34.301045    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:29:34.302861    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 00:17] overlayfs: idmapped layers are currently not supported
	[Dec12 00:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:29:34 up  3:11,  0 user,  load average: 0.10, 0.44, 1.05
	Linux functional-035643 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:29:31 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:29:32 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 12 00:29:32 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:29:32 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:29:32 functional-035643 kubelet[4748]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:29:32 functional-035643 kubelet[4748]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:29:32 functional-035643 kubelet[4748]: E1212 00:29:32.516888    4748 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:29:32 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:29:32 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:29:33 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 12 00:29:33 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:29:33 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:29:33 functional-035643 kubelet[4819]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:29:33 functional-035643 kubelet[4819]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:29:33 functional-035643 kubelet[4819]: E1212 00:29:33.282400    4819 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:29:33 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:29:33 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:29:33 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 12 00:29:33 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:29:33 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:29:34 functional-035643 kubelet[4860]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:29:34 functional-035643 kubelet[4860]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:29:34 functional-035643 kubelet[4860]: E1212 00:29:34.026452    4860 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:29:34 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:29:34 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
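
Note: the kubelet journal above is the root cause of this failure: kubelet v1.35.0-beta.0 refuses to start because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), systemd keeps restarting it (restart counter 647-649), and the apiserver on localhost:8441 never comes up. A minimal way to confirm the cgroup mode on a host (standard Linux commands, not part of this test suite):

	# prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# on systemd hosts, cgroup v2 can be enabled via the kernel command line:
	#   systemd.unified_cgroup_hierarchy=1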
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 6 (358.66103ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 00:29:34.775863  524989 status.go:458] kubeconfig endpoint: get endpoint: "functional-035643" does not appear in /home/jenkins/minikube-integration/22101-487723/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (503.21s)
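
Note: the status check above also shows a stale kubectl context: "functional-035643" is missing from /home/jenkins/minikube-integration/22101-487723/kubeconfig, and minikube prints the fix itself ("To fix the kubectl context, run `minikube update-context`"). A sketch of that recovery step, assuming the same profile name:

	# rewrite the kubeconfig entry for this profile to the current endpoint
	out/minikube-linux-arm64 update-context -p functional-035643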

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1212 00:29:34.791536  490954 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-035643 --alsologtostderr -v=8
E1212 00:30:17.605458  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:30:45.310234  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:33:33.590413  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:34:56.662157  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:35:17.605453  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-035643 --alsologtostderr -v=8: exit status 80 (6m6.204574831s)
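
Note: the cert_rotation errors logged during this run reference client certificates for other profiles (functional-921447, addons-199484) that no longer exist on disk; they appear to be leftover kubeconfig noise rather than part of this test's failure. A quick way to list which profile certs actually remain (path taken from the log; the glob is illustrative):

	ls -l /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/*/client.crt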

-- stdout --
	* [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1212 00:29:34.833608  525066 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:29:34.833799  525066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:34.833830  525066 out.go:374] Setting ErrFile to fd 2...
	I1212 00:29:34.833859  525066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:34.834244  525066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:29:34.834787  525066 out.go:368] Setting JSON to false
	I1212 00:29:34.835727  525066 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11520,"bootTime":1765487855,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:29:34.836335  525066 start.go:143] virtualization:  
	I1212 00:29:34.841302  525066 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:29:34.846669  525066 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:29:34.846785  525066 notify.go:221] Checking for updates...
	I1212 00:29:34.852399  525066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:29:34.855222  525066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:34.857924  525066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:29:34.860585  525066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:29:34.863145  525066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:29:34.866639  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:34.866818  525066 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:29:34.892569  525066 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:29:34.892680  525066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:34.954074  525066 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:29:34.944774098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:29:34.954186  525066 docker.go:319] overlay module found
	I1212 00:29:34.958427  525066 out.go:179] * Using the docker driver based on existing profile
	I1212 00:29:34.960983  525066 start.go:309] selected driver: docker
	I1212 00:29:34.961005  525066 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:34.961104  525066 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:29:34.961212  525066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:35.019269  525066 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:29:35.008770771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:29:35.019716  525066 cni.go:84] Creating CNI manager for ""
	I1212 00:29:35.019778  525066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:35.019842  525066 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:35.022879  525066 out.go:179] * Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	I1212 00:29:35.025659  525066 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:29:35.028463  525066 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:29:35.031434  525066 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:29:35.031495  525066 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:29:35.031510  525066 cache.go:65] Caching tarball of preloaded images
	I1212 00:29:35.031544  525066 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:29:35.031603  525066 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:29:35.031614  525066 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:29:35.031729  525066 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/config.json ...
	I1212 00:29:35.051219  525066 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:29:35.051245  525066 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:29:35.051267  525066 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:29:35.051303  525066 start.go:360] acquireMachinesLock for functional-035643: {Name:mkb0cdc7d354412594dc63c0234fde00134e758d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:29:35.051387  525066 start.go:364] duration metric: took 54.908µs to acquireMachinesLock for "functional-035643"
	I1212 00:29:35.051416  525066 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:29:35.051428  525066 fix.go:54] fixHost starting: 
	I1212 00:29:35.051696  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:35.069320  525066 fix.go:112] recreateIfNeeded on functional-035643: state=Running err=<nil>
	W1212 00:29:35.069352  525066 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:29:35.072554  525066 out.go:252] * Updating the running docker "functional-035643" container ...
	I1212 00:29:35.072600  525066 machine.go:94] provisionDockerMachine start ...
	I1212 00:29:35.072693  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.090330  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.090669  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.090706  525066 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:29:35.238363  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:29:35.238387  525066 ubuntu.go:182] provisioning hostname "functional-035643"
	I1212 00:29:35.238453  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.256201  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.256511  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.256528  525066 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-035643 && echo "functional-035643" | sudo tee /etc/hostname
	I1212 00:29:35.418094  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:29:35.418176  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.436164  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.436475  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.436494  525066 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-035643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-035643/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-035643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:29:35.594938  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:29:35.594969  525066 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 00:29:35.595009  525066 ubuntu.go:190] setting up certificates
	I1212 00:29:35.595026  525066 provision.go:84] configureAuth start
	I1212 00:29:35.595111  525066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:29:35.612398  525066 provision.go:143] copyHostCerts
	I1212 00:29:35.612439  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:29:35.612482  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 00:29:35.612494  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:29:35.612571  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 00:29:35.612671  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:29:35.612699  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 00:29:35.612707  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:29:35.612734  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 00:29:35.612781  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:29:35.612802  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 00:29:35.612813  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:29:35.612837  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 00:29:35.612889  525066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.functional-035643 san=[127.0.0.1 192.168.49.2 functional-035643 localhost minikube]
	I1212 00:29:35.977748  525066 provision.go:177] copyRemoteCerts
	I1212 00:29:35.977818  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:29:35.977857  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.995348  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.106772  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:29:36.106859  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:29:36.126035  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:29:36.126112  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:29:36.143996  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:29:36.144114  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:29:36.161387  525066 provision.go:87] duration metric: took 566.343959ms to configureAuth
	I1212 00:29:36.161415  525066 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:29:36.161612  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:36.161722  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.179565  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:36.179872  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:36.179896  525066 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:29:36.525259  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:29:36.525285  525066 machine.go:97] duration metric: took 1.45267532s to provisionDockerMachine
	I1212 00:29:36.525297  525066 start.go:293] postStartSetup for "functional-035643" (driver="docker")
	I1212 00:29:36.525310  525066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:29:36.525385  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:29:36.525432  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.544323  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.650745  525066 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:29:36.654027  525066 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 00:29:36.654058  525066 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 00:29:36.654063  525066 command_runner.go:130] > VERSION_ID="12"
	I1212 00:29:36.654067  525066 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 00:29:36.654072  525066 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 00:29:36.654076  525066 command_runner.go:130] > ID=debian
	I1212 00:29:36.654081  525066 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 00:29:36.654086  525066 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 00:29:36.654098  525066 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 00:29:36.654164  525066 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:29:36.654184  525066 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:29:36.654203  525066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 00:29:36.654261  525066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 00:29:36.654368  525066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 00:29:36.654379  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /etc/ssl/certs/4909542.pem
	I1212 00:29:36.654462  525066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> hosts in /etc/test/nested/copy/490954
	I1212 00:29:36.654470  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> /etc/test/nested/copy/490954/hosts
	I1212 00:29:36.654523  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/490954
	I1212 00:29:36.661942  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:29:36.678936  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts --> /etc/test/nested/copy/490954/hosts (40 bytes)
	I1212 00:29:36.696209  525066 start.go:296] duration metric: took 170.896684ms for postStartSetup
	I1212 00:29:36.696330  525066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:29:36.696401  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.716202  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.819154  525066 command_runner.go:130] > 18%
	I1212 00:29:36.819742  525066 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:29:36.823869  525066 command_runner.go:130] > 160G
	I1212 00:29:36.824320  525066 fix.go:56] duration metric: took 1.772888094s for fixHost
	I1212 00:29:36.824342  525066 start.go:83] releasing machines lock for "functional-035643", held for 1.772938226s
	I1212 00:29:36.824419  525066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:29:36.841414  525066 ssh_runner.go:195] Run: cat /version.json
	I1212 00:29:36.841444  525066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:29:36.841465  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.841499  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.858975  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.864277  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:37.063000  525066 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:29:37.063067  525066 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1212 00:29:37.063223  525066 ssh_runner.go:195] Run: systemctl --version
	I1212 00:29:37.069375  525066 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 00:29:37.069421  525066 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 00:29:37.069789  525066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:29:37.107153  525066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:29:37.111099  525066 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:29:37.111476  525066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:29:37.111538  525066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:29:37.119321  525066 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:29:37.119346  525066 start.go:496] detecting cgroup driver to use...
	I1212 00:29:37.119377  525066 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:29:37.119429  525066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:29:37.134288  525066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:29:37.147114  525066 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:29:37.147210  525066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:29:37.162260  525066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:29:37.175226  525066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:29:37.287755  525066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:29:37.404746  525066 docker.go:234] disabling docker service ...
	I1212 00:29:37.404828  525066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:29:37.419834  525066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:29:37.433027  525066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:29:37.553874  525066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:29:37.677379  525066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:29:37.696856  525066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:29:37.711415  525066 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 00:29:37.712568  525066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:29:37.712642  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.724126  525066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:29:37.724197  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.733568  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.743368  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.752442  525066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:29:37.761570  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.771444  525066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.780014  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.788901  525066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:29:37.795786  525066 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:29:37.796743  525066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:29:37.804315  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:37.916494  525066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:29:38.098236  525066 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:29:38.098362  525066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:29:38.102398  525066 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 00:29:38.102430  525066 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:29:38.102438  525066 command_runner.go:130] > Device: 0,72	Inode: 1642        Links: 1
	I1212 00:29:38.102445  525066 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:29:38.102451  525066 command_runner.go:130] > Access: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102458  525066 command_runner.go:130] > Modify: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102463  525066 command_runner.go:130] > Change: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102467  525066 command_runner.go:130] >  Birth: -
	I1212 00:29:38.102500  525066 start.go:564] Will wait 60s for crictl version
	I1212 00:29:38.102554  525066 ssh_runner.go:195] Run: which crictl
	I1212 00:29:38.105961  525066 command_runner.go:130] > /usr/local/bin/crictl
	I1212 00:29:38.106209  525066 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:29:38.130147  525066 command_runner.go:130] > Version:  0.1.0
	I1212 00:29:38.130215  525066 command_runner.go:130] > RuntimeName:  cri-o
	I1212 00:29:38.130236  525066 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1212 00:29:38.130255  525066 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:29:38.130299  525066 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:29:38.130400  525066 ssh_runner.go:195] Run: crio --version
	I1212 00:29:38.156955  525066 command_runner.go:130] > crio version 1.34.3
	I1212 00:29:38.157026  525066 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 00:29:38.157055  525066 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 00:29:38.157075  525066 command_runner.go:130] >    GitTreeState:   dirty
	I1212 00:29:38.157101  525066 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 00:29:38.157125  525066 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 00:29:38.157142  525066 command_runner.go:130] >    Compiler:       gc
	I1212 00:29:38.157162  525066 command_runner.go:130] >    Platform:       linux/arm64
	I1212 00:29:38.157188  525066 command_runner.go:130] >    Linkmode:       static
	I1212 00:29:38.157205  525066 command_runner.go:130] >    BuildTags:
	I1212 00:29:38.157231  525066 command_runner.go:130] >      static
	I1212 00:29:38.157260  525066 command_runner.go:130] >      netgo
	I1212 00:29:38.157278  525066 command_runner.go:130] >      osusergo
	I1212 00:29:38.157296  525066 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 00:29:38.157309  525066 command_runner.go:130] >      seccomp
	I1212 00:29:38.157334  525066 command_runner.go:130] >      apparmor
	I1212 00:29:38.157350  525066 command_runner.go:130] >      selinux
	I1212 00:29:38.157366  525066 command_runner.go:130] >    LDFlags:          unknown
	I1212 00:29:38.157384  525066 command_runner.go:130] >    SeccompEnabled:   true
	I1212 00:29:38.157415  525066 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 00:29:38.159818  525066 ssh_runner.go:195] Run: crio --version
	I1212 00:29:38.187365  525066 command_runner.go:130] > crio version 1.34.3
	I1212 00:29:38.187391  525066 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 00:29:38.187398  525066 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 00:29:38.187403  525066 command_runner.go:130] >    GitTreeState:   dirty
	I1212 00:29:38.187408  525066 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 00:29:38.187414  525066 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 00:29:38.187418  525066 command_runner.go:130] >    Compiler:       gc
	I1212 00:29:38.187438  525066 command_runner.go:130] >    Platform:       linux/arm64
	I1212 00:29:38.187447  525066 command_runner.go:130] >    Linkmode:       static
	I1212 00:29:38.187451  525066 command_runner.go:130] >    BuildTags:
	I1212 00:29:38.187455  525066 command_runner.go:130] >      static
	I1212 00:29:38.187459  525066 command_runner.go:130] >      netgo
	I1212 00:29:38.187463  525066 command_runner.go:130] >      osusergo
	I1212 00:29:38.187468  525066 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 00:29:38.187481  525066 command_runner.go:130] >      seccomp
	I1212 00:29:38.187489  525066 command_runner.go:130] >      apparmor
	I1212 00:29:38.187494  525066 command_runner.go:130] >      selinux
	I1212 00:29:38.187502  525066 command_runner.go:130] >    LDFlags:          unknown
	I1212 00:29:38.187507  525066 command_runner.go:130] >    SeccompEnabled:   true
	I1212 00:29:38.187511  525066 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 00:29:38.193058  525066 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:29:38.195137  525066 cli_runner.go:164] Run: docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:29:38.211553  525066 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:29:38.215227  525066 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 00:29:38.215507  525066 kubeadm.go:884] updating cluster {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:29:38.215633  525066 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:29:38.215688  525066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:38.248801  525066 command_runner.go:130] > {
	I1212 00:29:38.248822  525066 command_runner.go:130] >   "images":  [
	I1212 00:29:38.248827  525066 command_runner.go:130] >     {
	I1212 00:29:38.248837  525066 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:29:38.248842  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.248851  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:29:38.248855  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248859  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.248869  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 00:29:38.248877  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 00:29:38.248880  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248885  525066 command_runner.go:130] >       "size":  "111333938",
	I1212 00:29:38.248893  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.248898  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.248901  525066 command_runner.go:130] >     },
	I1212 00:29:38.248905  525066 command_runner.go:130] >     {
	I1212 00:29:38.248911  525066 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:29:38.248926  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.248931  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:29:38.248935  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248939  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.248951  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 00:29:38.248960  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:29:38.248967  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248971  525066 command_runner.go:130] >       "size":  "29037500",
	I1212 00:29:38.248975  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.248983  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.248987  525066 command_runner.go:130] >     },
	I1212 00:29:38.248990  525066 command_runner.go:130] >     {
	I1212 00:29:38.248998  525066 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:29:38.249004  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249018  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:29:38.249026  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249036  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249044  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 00:29:38.249058  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 00:29:38.249061  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249065  525066 command_runner.go:130] >       "size":  "74491780",
	I1212 00:29:38.249070  525066 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:29:38.249073  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249080  525066 command_runner.go:130] >     },
	I1212 00:29:38.249083  525066 command_runner.go:130] >     {
	I1212 00:29:38.249093  525066 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:29:38.249104  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249109  525066 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:29:38.249112  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249116  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249125  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 00:29:38.249135  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 00:29:38.249139  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249142  525066 command_runner.go:130] >       "size":  "60857170",
	I1212 00:29:38.249146  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249150  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249153  525066 command_runner.go:130] >       },
	I1212 00:29:38.249166  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249173  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249177  525066 command_runner.go:130] >     },
	I1212 00:29:38.249179  525066 command_runner.go:130] >     {
	I1212 00:29:38.249186  525066 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:29:38.249192  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249197  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:29:38.249201  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249205  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249215  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 00:29:38.249230  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 00:29:38.249234  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249241  525066 command_runner.go:130] >       "size":  "84949999",
	I1212 00:29:38.249245  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249249  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249254  525066 command_runner.go:130] >       },
	I1212 00:29:38.249259  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249263  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249268  525066 command_runner.go:130] >     },
	I1212 00:29:38.249272  525066 command_runner.go:130] >     {
	I1212 00:29:38.249281  525066 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:29:38.249294  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249301  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:29:38.249304  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249308  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249317  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 00:29:38.249326  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 00:29:38.249337  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249341  525066 command_runner.go:130] >       "size":  "72170325",
	I1212 00:29:38.249345  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249348  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249356  525066 command_runner.go:130] >       },
	I1212 00:29:38.249364  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249367  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249371  525066 command_runner.go:130] >     },
	I1212 00:29:38.249374  525066 command_runner.go:130] >     {
	I1212 00:29:38.249381  525066 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:29:38.249386  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249391  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:29:38.249394  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249398  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249409  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 00:29:38.249426  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:29:38.249434  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249438  525066 command_runner.go:130] >       "size":  "74106775",
	I1212 00:29:38.249450  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249454  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249458  525066 command_runner.go:130] >     },
	I1212 00:29:38.249461  525066 command_runner.go:130] >     {
	I1212 00:29:38.249468  525066 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:29:38.249472  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249481  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:29:38.249484  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249488  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249502  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 00:29:38.249522  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 00:29:38.249528  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249532  525066 command_runner.go:130] >       "size":  "49822549",
	I1212 00:29:38.249535  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249539  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249549  525066 command_runner.go:130] >       },
	I1212 00:29:38.249553  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249556  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249559  525066 command_runner.go:130] >     },
	I1212 00:29:38.249562  525066 command_runner.go:130] >     {
	I1212 00:29:38.249568  525066 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:29:38.249572  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249576  525066 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.249581  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249586  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249598  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 00:29:38.249606  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 00:29:38.249613  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249617  525066 command_runner.go:130] >       "size":  "519884",
	I1212 00:29:38.249621  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249626  525066 command_runner.go:130] >         "value":  "65535"
	I1212 00:29:38.249633  525066 command_runner.go:130] >       },
	I1212 00:29:38.249642  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249646  525066 command_runner.go:130] >       "pinned":  true
	I1212 00:29:38.249649  525066 command_runner.go:130] >     }
	I1212 00:29:38.249653  525066 command_runner.go:130] >   ]
	I1212 00:29:38.249656  525066 command_runner.go:130] > }
	I1212 00:29:38.252138  525066 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:38.252165  525066 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:29:38.252226  525066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:38.276626  525066 command_runner.go:130] > {
	I1212 00:29:38.276647  525066 command_runner.go:130] >   "images":  [
	I1212 00:29:38.276651  525066 command_runner.go:130] >     {
	I1212 00:29:38.276660  525066 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:29:38.276674  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276681  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:29:38.276684  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276690  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276700  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 00:29:38.276711  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 00:29:38.276717  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276721  525066 command_runner.go:130] >       "size":  "111333938",
	I1212 00:29:38.276725  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.276731  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276737  525066 command_runner.go:130] >     },
	I1212 00:29:38.276740  525066 command_runner.go:130] >     {
	I1212 00:29:38.276747  525066 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:29:38.276754  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276760  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:29:38.276767  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276771  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276781  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 00:29:38.276790  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:29:38.276794  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276799  525066 command_runner.go:130] >       "size":  "29037500",
	I1212 00:29:38.276807  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.276815  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276822  525066 command_runner.go:130] >     },
	I1212 00:29:38.276826  525066 command_runner.go:130] >     {
	I1212 00:29:38.276833  525066 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:29:38.276839  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276845  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:29:38.276850  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276854  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276868  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 00:29:38.276876  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 00:29:38.276879  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276883  525066 command_runner.go:130] >       "size":  "74491780",
	I1212 00:29:38.276891  525066 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:29:38.276895  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276901  525066 command_runner.go:130] >     },
	I1212 00:29:38.276904  525066 command_runner.go:130] >     {
	I1212 00:29:38.276911  525066 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:29:38.276918  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276922  525066 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:29:38.276925  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276930  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276940  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 00:29:38.276951  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 00:29:38.276954  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276973  525066 command_runner.go:130] >       "size":  "60857170",
	I1212 00:29:38.276977  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.276980  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.276983  525066 command_runner.go:130] >       },
	I1212 00:29:38.276994  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277001  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277004  525066 command_runner.go:130] >     },
	I1212 00:29:38.277007  525066 command_runner.go:130] >     {
	I1212 00:29:38.277014  525066 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:29:38.277019  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277032  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:29:38.277039  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277043  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277051  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 00:29:38.277066  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 00:29:38.277070  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277074  525066 command_runner.go:130] >       "size":  "84949999",
	I1212 00:29:38.277078  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277086  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277089  525066 command_runner.go:130] >       },
	I1212 00:29:38.277093  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277101  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277104  525066 command_runner.go:130] >     },
	I1212 00:29:38.277110  525066 command_runner.go:130] >     {
	I1212 00:29:38.277117  525066 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:29:38.277123  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277129  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:29:38.277132  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277136  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277145  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 00:29:38.277157  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 00:29:38.277160  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277164  525066 command_runner.go:130] >       "size":  "72170325",
	I1212 00:29:38.277167  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277171  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277175  525066 command_runner.go:130] >       },
	I1212 00:29:38.277181  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277186  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277191  525066 command_runner.go:130] >     },
	I1212 00:29:38.277194  525066 command_runner.go:130] >     {
	I1212 00:29:38.277203  525066 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:29:38.277209  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277215  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:29:38.277225  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277229  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277238  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 00:29:38.277251  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:29:38.277255  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277259  525066 command_runner.go:130] >       "size":  "74106775",
	I1212 00:29:38.277263  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277269  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277273  525066 command_runner.go:130] >     },
	I1212 00:29:38.277276  525066 command_runner.go:130] >     {
	I1212 00:29:38.277283  525066 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:29:38.277289  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277294  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:29:38.277297  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277301  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277309  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 00:29:38.277326  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 00:29:38.277330  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277334  525066 command_runner.go:130] >       "size":  "49822549",
	I1212 00:29:38.277340  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277344  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277347  525066 command_runner.go:130] >       },
	I1212 00:29:38.277351  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277357  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277360  525066 command_runner.go:130] >     },
	I1212 00:29:38.277364  525066 command_runner.go:130] >     {
	I1212 00:29:38.277373  525066 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:29:38.277377  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277390  525066 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.277394  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277397  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277405  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 00:29:38.277416  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 00:29:38.277424  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277429  525066 command_runner.go:130] >       "size":  "519884",
	I1212 00:29:38.277432  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277438  525066 command_runner.go:130] >         "value":  "65535"
	I1212 00:29:38.277442  525066 command_runner.go:130] >       },
	I1212 00:29:38.277447  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277453  525066 command_runner.go:130] >       "pinned":  true
	I1212 00:29:38.277456  525066 command_runner.go:130] >     }
	I1212 00:29:38.277459  525066 command_runner.go:130] >   ]
	I1212 00:29:38.277464  525066 command_runner.go:130] > }
	I1212 00:29:38.282583  525066 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:38.282606  525066 cache_images.go:86] Images are preloaded, skipping loading
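	(Editorial note: the preload decision above — crio.go:514 and cache_images.go:86 — is driven entirely by the `sudo crictl images --output json` output dumped in the log. Below is a minimal Go sketch of that kind of check, assuming only the JSON shape visible above; the helper name, struct, and expected-image list are illustrative and are not minikube's actual code.)

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON shape emitted by `crictl images --output json`
// as seen in the log above: an "images" array, each entry carrying repoTags.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // quoted number, per the log output
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

// allPreloaded is an illustrative helper (not minikube's actual function):
// it reports whether every expected tag already exists in crictl's output.
func allPreloaded(expected []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range expected {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := allPreloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/pause:3.10.1",
	})
	if err != nil {
		fmt.Println("crictl query failed:", err)
		return
	}
	fmt.Println("all images preloaded:", ok)
}

	(Run on the node with the tags from the log, this should print "all images preloaded: true", consistent with the "Images are preloaded, skipping loading" line above.)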
	I1212 00:29:38.282613  525066 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 00:29:38.282744  525066 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-035643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
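	(Editorial note: the kubelet unit text above is rendered from the cluster config printed on the preceding line — kubeadm.go:935/947. Below is a rough Go sketch of that templating step, assuming only the fields visible in the log; the struct, function, and binary-path convention are taken from or inferred from this log and are illustrative, not minikube's actual kubeadm.go.)

package main

import (
	"fmt"
	"strings"
)

// nodeConfig carries only the fields the ExecStart line above actually uses;
// the names mirror the config struct printed in the log (illustrative subset).
type nodeConfig struct {
	KubernetesVersion string
	ClusterName       string
	NodeIP            string
}

// kubeletExecStart rebuilds the flag line seen in the unit dump above.
// The binary path convention (/var/lib/minikube/binaries/<version>/kubelet)
// appears in the log; everything else here is an assumption for illustration.
func kubeletExecStart(c nodeConfig) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--cgroups-per-qos=false",
		"--config=/var/lib/kubelet/config.yaml",
		"--enforce-node-allocatable=",
		"--hostname-override=" + c.ClusterName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + c.NodeIP,
	}
	return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet %s",
		c.KubernetesVersion, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletExecStart(nodeConfig{
		KubernetesVersion: "v1.35.0-beta.0",
		ClusterName:       "functional-035643",
		NodeIP:            "192.168.49.2",
	}))
}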
	I1212 00:29:38.282831  525066 ssh_runner.go:195] Run: crio config
	I1212 00:29:38.339065  525066 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 00:29:38.339140  525066 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 00:29:38.339162  525066 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 00:29:38.339180  525066 command_runner.go:130] > #
	I1212 00:29:38.339218  525066 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 00:29:38.339243  525066 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 00:29:38.339261  525066 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 00:29:38.339304  525066 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 00:29:38.339327  525066 command_runner.go:130] > # reload'.
	I1212 00:29:38.339346  525066 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 00:29:38.339379  525066 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 00:29:38.339402  525066 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 00:29:38.339422  525066 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 00:29:38.339436  525066 command_runner.go:130] > [crio]
	I1212 00:29:38.339466  525066 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 00:29:38.339488  525066 command_runner.go:130] > # container images, in this directory.
	I1212 00:29:38.339510  525066 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 00:29:38.339541  525066 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 00:29:38.339562  525066 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1212 00:29:38.339583  525066 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1212 00:29:38.339600  525066 command_runner.go:130] > # imagestore = ""
	I1212 00:29:38.339629  525066 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 00:29:38.339652  525066 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 00:29:38.339676  525066 command_runner.go:130] > # storage_driver = "overlay"
	I1212 00:29:38.339707  525066 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 00:29:38.339730  525066 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 00:29:38.339746  525066 command_runner.go:130] > # storage_option = [
	I1212 00:29:38.339762  525066 command_runner.go:130] > # ]
	I1212 00:29:38.339794  525066 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 00:29:38.339818  525066 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 00:29:38.339834  525066 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 00:29:38.339852  525066 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 00:29:38.339890  525066 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 00:29:38.339907  525066 command_runner.go:130] > # always happen on a node reboot
	I1212 00:29:38.339923  525066 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 00:29:38.339959  525066 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 00:29:38.339984  525066 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 00:29:38.340001  525066 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 00:29:38.340029  525066 command_runner.go:130] > # version_file_persist = ""
	I1212 00:29:38.340052  525066 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 00:29:38.340072  525066 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 00:29:38.340087  525066 command_runner.go:130] > # internal_wipe = true
	I1212 00:29:38.340117  525066 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1212 00:29:38.340140  525066 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1212 00:29:38.340157  525066 command_runner.go:130] > # internal_repair = true
	I1212 00:29:38.340175  525066 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 00:29:38.340208  525066 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 00:29:38.340228  525066 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 00:29:38.340246  525066 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 00:29:38.340277  525066 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 00:29:38.340300  525066 command_runner.go:130] > [crio.api]
	I1212 00:29:38.340319  525066 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 00:29:38.340336  525066 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 00:29:38.340365  525066 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 00:29:38.340387  525066 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 00:29:38.340407  525066 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 00:29:38.340447  525066 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 00:29:38.340822  525066 command_runner.go:130] > # stream_port = "0"
	I1212 00:29:38.340835  525066 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 00:29:38.341007  525066 command_runner.go:130] > # stream_enable_tls = false
	I1212 00:29:38.341018  525066 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 00:29:38.341210  525066 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 00:29:38.341221  525066 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 00:29:38.341229  525066 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1212 00:29:38.341233  525066 command_runner.go:130] > # stream_tls_cert = ""
	I1212 00:29:38.341239  525066 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 00:29:38.341245  525066 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1212 00:29:38.341249  525066 command_runner.go:130] > # stream_tls_key = ""
	I1212 00:29:38.341255  525066 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 00:29:38.341261  525066 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 00:29:38.341272  525066 command_runner.go:130] > # automatically pick up the changes.
	I1212 00:29:38.341446  525066 command_runner.go:130] > # stream_tls_ca = ""
	I1212 00:29:38.341475  525066 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:29:38.341751  525066 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 00:29:38.341765  525066 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:29:38.341770  525066 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1212 00:29:38.341777  525066 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 00:29:38.341782  525066 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 00:29:38.341786  525066 command_runner.go:130] > [crio.runtime]
	I1212 00:29:38.341792  525066 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 00:29:38.341798  525066 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 00:29:38.341801  525066 command_runner.go:130] > # "nofile=1024:2048"
	I1212 00:29:38.341807  525066 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 00:29:38.341811  525066 command_runner.go:130] > # default_ulimits = [
	I1212 00:29:38.341814  525066 command_runner.go:130] > # ]
	I1212 00:29:38.341821  525066 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 00:29:38.341824  525066 command_runner.go:130] > # no_pivot = false
	I1212 00:29:38.341830  525066 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 00:29:38.341836  525066 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 00:29:38.341841  525066 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 00:29:38.341847  525066 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 00:29:38.341851  525066 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 00:29:38.341858  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:29:38.342059  525066 command_runner.go:130] > # conmon = ""
	I1212 00:29:38.342069  525066 command_runner.go:130] > # Cgroup setting for conmon
	I1212 00:29:38.342077  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 00:29:38.342081  525066 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 00:29:38.342087  525066 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 00:29:38.342093  525066 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 00:29:38.342100  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:29:38.342293  525066 command_runner.go:130] > # conmon_env = [
	I1212 00:29:38.342301  525066 command_runner.go:130] > # ]
	I1212 00:29:38.342307  525066 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 00:29:38.342312  525066 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 00:29:38.342318  525066 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 00:29:38.342321  525066 command_runner.go:130] > # default_env = [
	I1212 00:29:38.342325  525066 command_runner.go:130] > # ]
	I1212 00:29:38.342330  525066 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 00:29:38.342338  525066 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1212 00:29:38.342531  525066 command_runner.go:130] > # selinux = false
	I1212 00:29:38.342542  525066 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 00:29:38.342551  525066 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1212 00:29:38.342556  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342765  525066 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:29:38.342777  525066 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1212 00:29:38.342783  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342787  525066 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1212 00:29:38.342804  525066 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 00:29:38.342810  525066 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 00:29:38.342817  525066 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 00:29:38.342823  525066 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 00:29:38.342828  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342833  525066 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 00:29:38.342838  525066 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 00:29:38.342842  525066 command_runner.go:130] > # the cgroup blockio controller.
	I1212 00:29:38.343029  525066 command_runner.go:130] > # blockio_config_file = ""
	I1212 00:29:38.343040  525066 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1212 00:29:38.343044  525066 command_runner.go:130] > # blockio parameters.
	I1212 00:29:38.343244  525066 command_runner.go:130] > # blockio_reload = false
	I1212 00:29:38.343255  525066 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 00:29:38.343260  525066 command_runner.go:130] > # irqbalance daemon.
	I1212 00:29:38.343265  525066 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 00:29:38.343271  525066 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1212 00:29:38.343278  525066 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1212 00:29:38.343285  525066 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1212 00:29:38.343472  525066 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1212 00:29:38.343488  525066 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 00:29:38.343494  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.343668  525066 command_runner.go:130] > # rdt_config_file = ""
	I1212 00:29:38.343679  525066 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 00:29:38.343683  525066 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 00:29:38.343690  525066 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 00:29:38.343893  525066 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 00:29:38.343905  525066 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 00:29:38.343912  525066 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 00:29:38.343920  525066 command_runner.go:130] > # will be added.
	I1212 00:29:38.343925  525066 command_runner.go:130] > # default_capabilities = [
	I1212 00:29:38.344172  525066 command_runner.go:130] > # 	"CHOWN",
	I1212 00:29:38.344180  525066 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 00:29:38.344184  525066 command_runner.go:130] > # 	"FSETID",
	I1212 00:29:38.344187  525066 command_runner.go:130] > # 	"FOWNER",
	I1212 00:29:38.344191  525066 command_runner.go:130] > # 	"SETGID",
	I1212 00:29:38.344194  525066 command_runner.go:130] > # 	"SETUID",
	I1212 00:29:38.344217  525066 command_runner.go:130] > # 	"SETPCAP",
	I1212 00:29:38.344397  525066 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 00:29:38.344405  525066 command_runner.go:130] > # 	"KILL",
	I1212 00:29:38.344408  525066 command_runner.go:130] > # ]
	I1212 00:29:38.344417  525066 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 00:29:38.344424  525066 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 00:29:38.344614  525066 command_runner.go:130] > # add_inheritable_capabilities = false
	I1212 00:29:38.344634  525066 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 00:29:38.344641  525066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:29:38.344645  525066 command_runner.go:130] > default_sysctls = [
	I1212 00:29:38.344818  525066 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1212 00:29:38.344834  525066 command_runner.go:130] > ]
	I1212 00:29:38.344839  525066 command_runner.go:130] > # List of devices on the host that a
	I1212 00:29:38.344846  525066 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 00:29:38.344850  525066 command_runner.go:130] > # allowed_devices = [
	I1212 00:29:38.345064  525066 command_runner.go:130] > # 	"/dev/fuse",
	I1212 00:29:38.345072  525066 command_runner.go:130] > # 	"/dev/net/tun",
	I1212 00:29:38.345076  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345089  525066 command_runner.go:130] > # List of additional devices, specified as
	I1212 00:29:38.345098  525066 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 00:29:38.345141  525066 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 00:29:38.345151  525066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:29:38.345155  525066 command_runner.go:130] > # additional_devices = [
	I1212 00:29:38.345354  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345364  525066 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 00:29:38.345368  525066 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 00:29:38.345371  525066 command_runner.go:130] > # 	"/etc/cdi",
	I1212 00:29:38.345585  525066 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 00:29:38.345593  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345600  525066 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 00:29:38.345606  525066 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 00:29:38.345609  525066 command_runner.go:130] > # Defaults to false.
	I1212 00:29:38.345614  525066 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 00:29:38.345652  525066 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 00:29:38.345661  525066 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 00:29:38.345665  525066 command_runner.go:130] > # hooks_dir = [
	I1212 00:29:38.345877  525066 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 00:29:38.345885  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345892  525066 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 00:29:38.345899  525066 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 00:29:38.345904  525066 command_runner.go:130] > # its default mounts from the following two files:
	I1212 00:29:38.345907  525066 command_runner.go:130] > #
	I1212 00:29:38.345914  525066 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 00:29:38.345957  525066 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 00:29:38.345963  525066 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 00:29:38.345966  525066 command_runner.go:130] > #
	I1212 00:29:38.345972  525066 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 00:29:38.345979  525066 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 00:29:38.345986  525066 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 00:29:38.345991  525066 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 00:29:38.346020  525066 command_runner.go:130] > #
	I1212 00:29:38.346210  525066 command_runner.go:130] > # default_mounts_file = ""
	I1212 00:29:38.346221  525066 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 00:29:38.346228  525066 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 00:29:38.346444  525066 command_runner.go:130] > # pids_limit = -1
	I1212 00:29:38.346456  525066 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 00:29:38.346463  525066 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 00:29:38.346469  525066 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 00:29:38.346478  525066 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 00:29:38.346512  525066 command_runner.go:130] > # log_size_max = -1
	I1212 00:29:38.346523  525066 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 00:29:38.346724  525066 command_runner.go:130] > # log_to_journald = false
	I1212 00:29:38.346736  525066 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 00:29:38.346742  525066 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 00:29:38.346747  525066 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 00:29:38.347111  525066 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 00:29:38.347122  525066 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 00:29:38.347127  525066 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 00:29:38.347132  525066 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 00:29:38.347136  525066 command_runner.go:130] > # read_only = false
	I1212 00:29:38.347142  525066 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 00:29:38.347149  525066 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 00:29:38.347186  525066 command_runner.go:130] > # live configuration reload.
	I1212 00:29:38.347359  525066 command_runner.go:130] > # log_level = "info"
	I1212 00:29:38.347376  525066 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 00:29:38.347381  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.347597  525066 command_runner.go:130] > # log_filter = ""
	I1212 00:29:38.347608  525066 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 00:29:38.347615  525066 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 00:29:38.347619  525066 command_runner.go:130] > # separated by comma.
	I1212 00:29:38.347671  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347679  525066 command_runner.go:130] > # uid_mappings = ""
	I1212 00:29:38.347686  525066 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 00:29:38.347692  525066 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 00:29:38.347696  525066 command_runner.go:130] > # separated by comma.
	I1212 00:29:38.347704  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347707  525066 command_runner.go:130] > # gid_mappings = ""
	I1212 00:29:38.347714  525066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 00:29:38.347746  525066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:29:38.347757  525066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:29:38.347765  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347769  525066 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 00:29:38.347775  525066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 00:29:38.347781  525066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:29:38.347787  525066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:29:38.347822  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.348158  525066 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 00:29:38.348170  525066 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 00:29:38.348176  525066 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 00:29:38.348182  525066 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 00:29:38.348415  525066 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 00:29:38.348427  525066 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 00:29:38.348433  525066 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 00:29:38.348438  525066 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 00:29:38.348442  525066 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 00:29:38.348641  525066 command_runner.go:130] > # drop_infra_ctr = true
	I1212 00:29:38.348653  525066 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 00:29:38.348659  525066 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 00:29:38.348666  525066 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 00:29:38.348674  525066 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 00:29:38.348712  525066 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1212 00:29:38.348725  525066 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1212 00:29:38.348731  525066 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1212 00:29:38.348736  525066 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1212 00:29:38.348935  525066 command_runner.go:130] > # shared_cpuset = ""
	I1212 00:29:38.348946  525066 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 00:29:38.348952  525066 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 00:29:38.348956  525066 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 00:29:38.348964  525066 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 00:29:38.349178  525066 command_runner.go:130] > # pinns_path = ""
	I1212 00:29:38.349189  525066 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1212 00:29:38.349195  525066 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1212 00:29:38.349199  525066 command_runner.go:130] > # enable_criu_support = true
	I1212 00:29:38.349214  525066 command_runner.go:130] > # Enable/disable the generation of the container and
	I1212 00:29:38.349253  525066 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1212 00:29:38.349272  525066 command_runner.go:130] > # enable_pod_events = false
	I1212 00:29:38.349291  525066 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 00:29:38.349322  525066 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1212 00:29:38.349505  525066 command_runner.go:130] > # default_runtime = "crun"
	I1212 00:29:38.349536  525066 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 00:29:38.349573  525066 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of the path being created as a directory).
	I1212 00:29:38.349601  525066 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 00:29:38.349618  525066 command_runner.go:130] > # creation as a file is not desired either.
	I1212 00:29:38.349653  525066 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 00:29:38.349674  525066 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 00:29:38.349690  525066 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 00:29:38.349956  525066 command_runner.go:130] > # ]
	I1212 00:29:38.350003  525066 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 00:29:38.350025  525066 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 00:29:38.350043  525066 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1212 00:29:38.350074  525066 command_runner.go:130] > # Each entry in the table should follow the format:
	I1212 00:29:38.350093  525066 command_runner.go:130] > #
	I1212 00:29:38.350110  525066 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1212 00:29:38.350127  525066 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1212 00:29:38.350158  525066 command_runner.go:130] > # runtime_type = "oci"
	I1212 00:29:38.350179  525066 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1212 00:29:38.350201  525066 command_runner.go:130] > # inherit_default_runtime = false
	I1212 00:29:38.350218  525066 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1212 00:29:38.350253  525066 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1212 00:29:38.350271  525066 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1212 00:29:38.350287  525066 command_runner.go:130] > # monitor_env = []
	I1212 00:29:38.350317  525066 command_runner.go:130] > # privileged_without_host_devices = false
	I1212 00:29:38.350339  525066 command_runner.go:130] > # allowed_annotations = []
	I1212 00:29:38.350358  525066 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1212 00:29:38.350372  525066 command_runner.go:130] > # no_sync_log = false
	I1212 00:29:38.350402  525066 command_runner.go:130] > # default_annotations = {}
	I1212 00:29:38.350419  525066 command_runner.go:130] > # stream_websockets = false
	I1212 00:29:38.350436  525066 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:29:38.350499  525066 command_runner.go:130] > # Where:
	I1212 00:29:38.350529  525066 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1212 00:29:38.350561  525066 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1212 00:29:38.350588  525066 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 00:29:38.350607  525066 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 00:29:38.350635  525066 command_runner.go:130] > #   in $PATH.
	I1212 00:29:38.350670  525066 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1212 00:29:38.350713  525066 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 00:29:38.350735  525066 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1212 00:29:38.350750  525066 command_runner.go:130] > #   state.
	I1212 00:29:38.350780  525066 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 00:29:38.350928  525066 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 00:29:38.351028  525066 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1212 00:29:38.351155  525066 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1212 00:29:38.351251  525066 command_runner.go:130] > #   the values from the default runtime on load time.
	I1212 00:29:38.351344  525066 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 00:29:38.351530  525066 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 00:29:38.351817  525066 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 00:29:38.352119  525066 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 00:29:38.352319  525066 command_runner.go:130] > #   The currently recognized values are:
	I1212 00:29:38.352557  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 00:29:38.352766  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 00:29:38.352929  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 00:29:38.353036  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 00:29:38.353153  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 00:29:38.353519  525066 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 00:29:38.353569  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1212 00:29:38.353580  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1212 00:29:38.353587  525066 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 00:29:38.353593  525066 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1212 00:29:38.353637  525066 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1212 00:29:38.353645  525066 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1212 00:29:38.353652  525066 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1212 00:29:38.353658  525066 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1212 00:29:38.353664  525066 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1212 00:29:38.353679  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1212 00:29:38.353695  525066 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1212 00:29:38.353699  525066 command_runner.go:130] > #   deprecated option "conmon".
	I1212 00:29:38.353706  525066 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1212 00:29:38.353766  525066 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1212 00:29:38.353805  525066 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1212 00:29:38.353814  525066 command_runner.go:130] > #   should be moved to the container's cgroup.
	I1212 00:29:38.353822  525066 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1212 00:29:38.353826  525066 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1212 00:29:38.353834  525066 command_runner.go:130] > #   When using the pod runtime and conmon-rs, monitor_env can be used to further configure
	I1212 00:29:38.353838  525066 command_runner.go:130] > #   conmon-rs by using:
	I1212 00:29:38.353893  525066 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1212 00:29:38.353903  525066 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1212 00:29:38.353947  525066 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1212 00:29:38.353958  525066 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1212 00:29:38.353963  525066 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1212 00:29:38.353971  525066 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1212 00:29:38.353979  525066 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1212 00:29:38.353984  525066 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1212 00:29:38.353992  525066 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1212 00:29:38.354039  525066 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1212 00:29:38.354048  525066 command_runner.go:130] > #   when a machine crash happens.
	I1212 00:29:38.354056  525066 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1212 00:29:38.354064  525066 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1212 00:29:38.354100  525066 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1212 00:29:38.354106  525066 command_runner.go:130] > #   seccomp profile for the runtime.
	I1212 00:29:38.354113  525066 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1212 00:29:38.354120  525066 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1212 00:29:38.354123  525066 command_runner.go:130] > #
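	For illustration, the handler options documented above would be combined in a stanza like the following (a minimal sketch; the handler name "myhandler", its paths, and the aarch64 override are hypothetical and not taken from this run):

	[crio.runtime.runtimes.myhandler]
	runtime_path = "/usr/local/bin/myhandler"   # executable implementing the handler
	runtime_type = "oci"                        # the default when omitted
	runtime_root = "/run/myhandler"             # per-handler container state
	monitor_path = "/usr/libexec/crio/conmon"   # replaces the deprecated "conmon" option
	monitor_env = [
		"LOG_DRIVER=systemd",                   # conmon-rs logging target, per the notes above
	]
	platform_runtime_paths = {"aarch64" = "/usr/local/bin/myhandler-arm64"}
	container_min_memory = "12MiB"              # matches the documented global default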
	I1212 00:29:38.354169  525066 command_runner.go:130] > # Using the seccomp notifier feature:
	I1212 00:29:38.354175  525066 command_runner.go:130] > #
	I1212 00:29:38.354188  525066 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1212 00:29:38.354195  525066 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1212 00:29:38.354198  525066 command_runner.go:130] > #
	I1212 00:29:38.354204  525066 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1212 00:29:38.354210  525066 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1212 00:29:38.354212  525066 command_runner.go:130] > #
	I1212 00:29:38.354258  525066 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1212 00:29:38.354270  525066 command_runner.go:130] > # feature.
	I1212 00:29:38.354273  525066 command_runner.go:130] > #
	I1212 00:29:38.354279  525066 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1212 00:29:38.354286  525066 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1212 00:29:38.354292  525066 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1212 00:29:38.354298  525066 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1212 00:29:38.354350  525066 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1212 00:29:38.354355  525066 command_runner.go:130] > #
	I1212 00:29:38.354362  525066 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1212 00:29:38.354402  525066 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1212 00:29:38.354408  525066 command_runner.go:130] > #
	I1212 00:29:38.354414  525066 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1212 00:29:38.354420  525066 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1212 00:29:38.354423  525066 command_runner.go:130] > #
	I1212 00:29:38.354429  525066 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1212 00:29:38.354471  525066 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1212 00:29:38.354477  525066 command_runner.go:130] > # limitation.
	I1212 00:29:38.354481  525066 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1212 00:29:38.354485  525066 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1212 00:29:38.354492  525066 command_runner.go:130] > runtime_type = ""
	I1212 00:29:38.354498  525066 command_runner.go:130] > runtime_root = "/run/crun"
	I1212 00:29:38.354502  525066 command_runner.go:130] > inherit_default_runtime = false
	I1212 00:29:38.354538  525066 command_runner.go:130] > runtime_config_path = ""
	I1212 00:29:38.354545  525066 command_runner.go:130] > container_min_memory = ""
	I1212 00:29:38.354550  525066 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 00:29:38.354554  525066 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 00:29:38.354558  525066 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:29:38.354561  525066 command_runner.go:130] > allowed_annotations = [
	I1212 00:29:38.354565  525066 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1212 00:29:38.354568  525066 command_runner.go:130] > ]
	I1212 00:29:38.354573  525066 command_runner.go:130] > privileged_without_host_devices = false
	I1212 00:29:38.354577  525066 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 00:29:38.354588  525066 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1212 00:29:38.354592  525066 command_runner.go:130] > runtime_type = ""
	I1212 00:29:38.354595  525066 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 00:29:38.354647  525066 command_runner.go:130] > inherit_default_runtime = false
	I1212 00:29:38.354654  525066 command_runner.go:130] > runtime_config_path = ""
	I1212 00:29:38.354659  525066 command_runner.go:130] > container_min_memory = ""
	I1212 00:29:38.354663  525066 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 00:29:38.354667  525066 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 00:29:38.354671  525066 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:29:38.354675  525066 command_runner.go:130] > privileged_without_host_devices = false
	I1212 00:29:38.354692  525066 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 00:29:38.354700  525066 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 00:29:38.354706  525066 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 00:29:38.354719  525066 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1212 00:29:38.354731  525066 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1212 00:29:38.354778  525066 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1212 00:29:38.354787  525066 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1212 00:29:38.354793  525066 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 00:29:38.354803  525066 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 00:29:38.354848  525066 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 00:29:38.354862  525066 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 00:29:38.354870  525066 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 00:29:38.354909  525066 command_runner.go:130] > # Example:
	I1212 00:29:38.354916  525066 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 00:29:38.354921  525066 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 00:29:38.354929  525066 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 00:29:38.354970  525066 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 00:29:38.354976  525066 command_runner.go:130] > # cpuset = "0-1"
	I1212 00:29:38.354979  525066 command_runner.go:130] > # cpushares = "5"
	I1212 00:29:38.354982  525066 command_runner.go:130] > # cpuquota = "1000"
	I1212 00:29:38.354986  525066 command_runner.go:130] > # cpuperiod = "100000"
	I1212 00:29:38.354989  525066 command_runner.go:130] > # cpulimit = "35"
	I1212 00:29:38.354992  525066 command_runner.go:130] > # Where:
	I1212 00:29:38.355002  525066 command_runner.go:130] > # The workload name is workload-type.
	I1212 00:29:38.355009  525066 command_runner.go:130] > # To select it, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 00:29:38.355015  525066 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 00:29:38.355066  525066 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 00:29:38.355077  525066 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 00:29:38.355083  525066 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 00:29:38.355088  525066 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1212 00:29:38.355095  525066 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1212 00:29:38.355099  525066 command_runner.go:130] > # Default value is set to true
	I1212 00:29:38.355467  525066 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1212 00:29:38.355620  525066 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1212 00:29:38.355721  525066 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1212 00:29:38.355871  525066 command_runner.go:130] > # Default value is set to 'false'
	I1212 00:29:38.356033  525066 command_runner.go:130] > # disable_hostport_mapping = false
	I1212 00:29:38.356163  525066 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1212 00:29:38.356284  525066 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1212 00:29:38.356367  525066 command_runner.go:130] > # timezone = ""
	I1212 00:29:38.356485  525066 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 00:29:38.356560  525066 command_runner.go:130] > #
	I1212 00:29:38.356636  525066 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 00:29:38.356830  525066 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1212 00:29:38.356937  525066 command_runner.go:130] > [crio.image]
	I1212 00:29:38.357065  525066 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 00:29:38.357172  525066 command_runner.go:130] > # default_transport = "docker://"
	I1212 00:29:38.357258  525066 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 00:29:38.357455  525066 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:29:38.357729  525066 command_runner.go:130] > # global_auth_file = ""
	I1212 00:29:38.357787  525066 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 00:29:38.357796  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.357801  525066 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.357809  525066 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 00:29:38.357821  525066 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:29:38.357827  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.357837  525066 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 00:29:38.357843  525066 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 00:29:38.357850  525066 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 00:29:38.358627  525066 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 00:29:38.358638  525066 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 00:29:38.358643  525066 command_runner.go:130] > # pause_command = "/pause"
	I1212 00:29:38.358649  525066 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1212 00:29:38.358655  525066 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1212 00:29:38.358662  525066 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1212 00:29:38.358668  525066 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1212 00:29:38.358674  525066 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1212 00:29:38.358693  525066 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1212 00:29:38.358700  525066 command_runner.go:130] > # pinned_images = [
	I1212 00:29:38.358703  525066 command_runner.go:130] > # ]
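	For example, a pinned_images list mixing the three pattern styles described above might look like this (the second and third entries are hypothetical names, used purely for illustration):

	pinned_images = [
		"registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
		"registry.k8s.io/kube-*",         # glob: wildcard at the end
		"*etcd*",                         # keyword: wildcards on both ends
	]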
	I1212 00:29:38.358709  525066 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 00:29:38.358716  525066 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 00:29:38.358723  525066 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 00:29:38.358729  525066 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 00:29:38.358734  525066 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 00:29:38.358740  525066 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1212 00:29:38.358745  525066 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1212 00:29:38.358752  525066 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1212 00:29:38.358758  525066 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1212 00:29:38.358764  525066 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1212 00:29:38.358771  525066 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1212 00:29:38.358776  525066 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
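	To make the lookup order concrete: with the values below, an image pulled for pod namespace "prod" (a hypothetical namespace) would be checked against /etc/crio/policies/prod.json, falling back to the signature_policy file when that path does not exist:

	signature_policy = "/etc/crio/policy.json"
	signature_policy_dir = "/etc/crio/policies"   # resolves to <dir>/<namespace>.json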
	I1212 00:29:38.358782  525066 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 00:29:38.358788  525066 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 00:29:38.358791  525066 command_runner.go:130] > # changing them here.
	I1212 00:29:38.358801  525066 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1212 00:29:38.358805  525066 command_runner.go:130] > # insecure_registries = [
	I1212 00:29:38.358808  525066 command_runner.go:130] > # ]
	I1212 00:29:38.358814  525066 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 00:29:38.358828  525066 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1212 00:29:38.358833  525066 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 00:29:38.358838  525066 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 00:29:38.358842  525066 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 00:29:38.358848  525066 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1212 00:29:38.358855  525066 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1212 00:29:38.358860  525066 command_runner.go:130] > # auto_reload_registries = false
	I1212 00:29:38.358866  525066 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1212 00:29:38.358874  525066 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval, as pull_progress_timeout / 10.
	I1212 00:29:38.358881  525066 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1212 00:29:38.358885  525066 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1212 00:29:38.358889  525066 command_runner.go:130] > # The mode of short name resolution.
	I1212 00:29:38.358896  525066 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1212 00:29:38.358903  525066 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1212 00:29:38.358908  525066 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1212 00:29:38.358913  525066 command_runner.go:130] > # short_name_mode = "enforcing"
	I1212 00:29:38.358919  525066 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1212 00:29:38.358925  525066 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1212 00:29:38.358929  525066 command_runner.go:130] > # oci_artifact_mount_support = true
	I1212 00:29:38.358935  525066 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 00:29:38.358938  525066 command_runner.go:130] > # CNI plugins.
	I1212 00:29:38.358941  525066 command_runner.go:130] > [crio.network]
	I1212 00:29:38.358947  525066 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 00:29:38.358952  525066 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1212 00:29:38.358956  525066 command_runner.go:130] > # cni_default_network = ""
	I1212 00:29:38.358966  525066 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 00:29:38.358970  525066 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 00:29:38.358975  525066 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 00:29:38.358979  525066 command_runner.go:130] > # plugin_dirs = [
	I1212 00:29:38.358982  525066 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 00:29:38.358985  525066 command_runner.go:130] > # ]
	I1212 00:29:38.358989  525066 command_runner.go:130] > # List of included pod metrics.
	I1212 00:29:38.358993  525066 command_runner.go:130] > # included_pod_metrics = [
	I1212 00:29:38.359000  525066 command_runner.go:130] > # ]
	I1212 00:29:38.359005  525066 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1212 00:29:38.359010  525066 command_runner.go:130] > [crio.metrics]
	I1212 00:29:38.359017  525066 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 00:29:38.359024  525066 command_runner.go:130] > # enable_metrics = false
	I1212 00:29:38.359029  525066 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 00:29:38.359034  525066 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 00:29:38.359040  525066 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 00:29:38.359048  525066 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 00:29:38.359054  525066 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 00:29:38.359068  525066 command_runner.go:130] > # metrics_collectors = [
	I1212 00:29:38.359072  525066 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 00:29:38.359076  525066 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1212 00:29:38.359079  525066 command_runner.go:130] > # 	"containers_oom_total",
	I1212 00:29:38.359083  525066 command_runner.go:130] > # 	"processes_defunct",
	I1212 00:29:38.359087  525066 command_runner.go:130] > # 	"operations_total",
	I1212 00:29:38.359091  525066 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 00:29:38.359095  525066 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 00:29:38.359099  525066 command_runner.go:130] > # 	"operations_errors_total",
	I1212 00:29:38.359103  525066 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 00:29:38.359107  525066 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 00:29:38.359111  525066 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 00:29:38.359115  525066 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 00:29:38.359119  525066 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 00:29:38.359123  525066 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 00:29:38.359128  525066 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1212 00:29:38.359132  525066 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1212 00:29:38.359137  525066 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1212 00:29:38.359139  525066 command_runner.go:130] > # ]
	I1212 00:29:38.359145  525066 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1212 00:29:38.359149  525066 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1212 00:29:38.359155  525066 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 00:29:38.359158  525066 command_runner.go:130] > # metrics_port = 9090
	I1212 00:29:38.359167  525066 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 00:29:38.359171  525066 command_runner.go:130] > # metrics_socket = ""
	I1212 00:29:38.359176  525066 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 00:29:38.359182  525066 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 00:29:38.359188  525066 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 00:29:38.359192  525066 command_runner.go:130] > # certificate on any modification event.
	I1212 00:29:38.359196  525066 command_runner.go:130] > # metrics_cert = ""
	I1212 00:29:38.359201  525066 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 00:29:38.359206  525066 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 00:29:38.359209  525066 command_runner.go:130] > # metrics_key = ""
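	A minimal enabled variant of this stanza, with a collector subset chosen purely for illustration, could look like:

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",              # equivalent to "crio_operations_total"
		"image_pulls_failure_total",
	]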
	I1212 00:29:38.359214  525066 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 00:29:38.359218  525066 command_runner.go:130] > [crio.tracing]
	I1212 00:29:38.359224  525066 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 00:29:38.359227  525066 command_runner.go:130] > # enable_tracing = false
	I1212 00:29:38.359233  525066 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1212 00:29:38.359237  525066 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1212 00:29:38.359243  525066 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1212 00:29:38.359249  525066 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 00:29:38.359253  525066 command_runner.go:130] > # CRI-O NRI configuration.
	I1212 00:29:38.359256  525066 command_runner.go:130] > [crio.nri]
	I1212 00:29:38.359260  525066 command_runner.go:130] > # Globally enable or disable NRI.
	I1212 00:29:38.359458  525066 command_runner.go:130] > # enable_nri = true
	I1212 00:29:38.359492  525066 command_runner.go:130] > # NRI socket to listen on.
	I1212 00:29:38.359531  525066 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1212 00:29:38.359552  525066 command_runner.go:130] > # NRI plugin directory to use.
	I1212 00:29:38.359571  525066 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1212 00:29:38.359603  525066 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1212 00:29:38.359625  525066 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1212 00:29:38.359646  525066 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1212 00:29:38.359766  525066 command_runner.go:130] > # nri_disable_connections = false
	I1212 00:29:38.359799  525066 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1212 00:29:38.359833  525066 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1212 00:29:38.359860  525066 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1212 00:29:38.359876  525066 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1212 00:29:38.359893  525066 command_runner.go:130] > # NRI default validator configuration.
	I1212 00:29:38.359933  525066 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1212 00:29:38.359959  525066 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1212 00:29:38.359990  525066 command_runner.go:130] > # can be restricted/rejected:
	I1212 00:29:38.360015  525066 command_runner.go:130] > # - OCI hook injection
	I1212 00:29:38.360033  525066 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1212 00:29:38.360064  525066 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1212 00:29:38.360089  525066 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1212 00:29:38.360107  525066 command_runner.go:130] > # - adjustment of linux namespaces
	I1212 00:29:38.360127  525066 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1212 00:29:38.360166  525066 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1212 00:29:38.360186  525066 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1212 00:29:38.360201  525066 command_runner.go:130] > #
	I1212 00:29:38.360237  525066 command_runner.go:130] > # [crio.nri.default_validator]
	I1212 00:29:38.360255  525066 command_runner.go:130] > # nri_enable_default_validator = false
	I1212 00:29:38.360272  525066 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1212 00:29:38.360303  525066 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1212 00:29:38.360330  525066 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1212 00:29:38.360348  525066 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1212 00:29:38.360476  525066 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1212 00:29:38.360648  525066 command_runner.go:130] > # nri_validator_required_plugins = [
	I1212 00:29:38.360681  525066 command_runner.go:130] > # ]
	I1212 00:29:38.360704  525066 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
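	Putting the validator options together, a sketch that rejects OCI hook injection and requires one plugin (the plugin name is hypothetical) might read:

	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	nri_validator_required_plugins = [
		"10-resource-policy",   # hypothetical plugin name
	]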
	I1212 00:29:38.360740  525066 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 00:29:38.360764  525066 command_runner.go:130] > [crio.stats]
	I1212 00:29:38.360783  525066 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 00:29:38.360814  525066 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 00:29:38.360847  525066 command_runner.go:130] > # stats_collection_period = 0
	I1212 00:29:38.360867  525066 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1212 00:29:38.360905  525066 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1212 00:29:38.360921  525066 command_runner.go:130] > # collection_period = 0
	I1212 00:29:38.360984  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313366715Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1212 00:29:38.361015  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313641917Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1212 00:29:38.361052  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313871475Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1212 00:29:38.361075  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.314022397Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1212 00:29:38.361124  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.314372427Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:38.361154  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.31485409Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1212 00:29:38.361178  525066 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 00:29:38.361311  525066 cni.go:84] Creating CNI manager for ""
	I1212 00:29:38.361353  525066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:38.361385  525066 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:29:38.361436  525066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-035643 NodeName:functional-035643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:29:38.361629  525066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-035643"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:29:38.361753  525066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:29:38.369085  525066 command_runner.go:130] > kubeadm
	I1212 00:29:38.369101  525066 command_runner.go:130] > kubectl
	I1212 00:29:38.369105  525066 command_runner.go:130] > kubelet
	I1212 00:29:38.369321  525066 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:29:38.369385  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:29:38.376829  525066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:29:38.389638  525066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:29:38.402701  525066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1212 00:29:38.415693  525066 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:29:38.420581  525066 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 00:29:38.420662  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:38.566232  525066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:39.219049  525066 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643 for IP: 192.168.49.2
	I1212 00:29:39.219079  525066 certs.go:195] generating shared ca certs ...
	I1212 00:29:39.219096  525066 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:39.219238  525066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 00:29:39.219285  525066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 00:29:39.219292  525066 certs.go:257] generating profile certs ...
	I1212 00:29:39.219491  525066 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key
	I1212 00:29:39.219603  525066 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493
	I1212 00:29:39.219699  525066 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key
	I1212 00:29:39.219742  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:29:39.219761  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:29:39.219773  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:29:39.219783  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:29:39.219798  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:29:39.219843  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:29:39.219860  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:29:39.219871  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:29:39.219967  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 00:29:39.220038  525066 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 00:29:39.220049  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:29:39.220117  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:29:39.220147  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:29:39.220202  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 00:29:39.220256  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:29:39.220332  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem -> /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.220378  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.220396  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.221003  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:29:39.242927  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:29:39.262484  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:29:39.285732  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:29:39.303346  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:29:39.320786  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:29:39.338821  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:29:39.356806  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:29:39.374381  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 00:29:39.392333  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 00:29:39.410089  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:29:39.427383  525066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:29:39.439725  525066 ssh_runner.go:195] Run: openssl version
	I1212 00:29:39.445636  525066 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 00:29:39.445982  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.453236  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 00:29:39.460672  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464184  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464289  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464344  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.505960  525066 command_runner.go:130] > 51391683
	I1212 00:29:39.506560  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:29:39.514611  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.522360  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 00:29:39.531109  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.534913  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.535312  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.535374  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.578207  525066 command_runner.go:130] > 3ec20f2e
	I1212 00:29:39.578374  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:29:39.586281  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.593845  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:29:39.601415  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605435  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605483  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605537  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.646250  525066 command_runner.go:130] > b5213941
	I1212 00:29:39.646757  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:29:39.654391  525066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:39.658287  525066 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:39.658314  525066 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 00:29:39.658322  525066 command_runner.go:130] > Device: 259,1	Inode: 2360480     Links: 1
	I1212 00:29:39.658330  525066 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:29:39.658336  525066 command_runner.go:130] > Access: 2025-12-12 00:25:30.972268820 +0000
	I1212 00:29:39.658341  525066 command_runner.go:130] > Modify: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658346  525066 command_runner.go:130] > Change: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658351  525066 command_runner.go:130] >  Birth: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658416  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:29:39.699997  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.700109  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:29:39.748952  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.749499  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:29:39.797710  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.798154  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:29:39.843103  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.843601  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:29:39.887374  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.887871  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:29:39.942362  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.942946  525066 kubeadm.go:401] StartCluster: {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:39.943046  525066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:29:39.943208  525066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:29:39.985575  525066 cri.go:89] found id: ""
	I1212 00:29:39.985700  525066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:29:39.993609  525066 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 00:29:39.993681  525066 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 00:29:39.993702  525066 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 00:29:39.994895  525066 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:29:39.994945  525066 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:29:39.995038  525066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:29:40.006978  525066 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:29:40.007554  525066 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-035643" does not appear in /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.007785  525066 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-487723/kubeconfig needs updating (will repair): [kubeconfig missing "functional-035643" cluster setting kubeconfig missing "functional-035643" context setting]
	I1212 00:29:40.008175  525066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.008787  525066 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.009179  525066 kapi.go:59] client config for functional-035643: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:40.009975  525066 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 00:29:40.010118  525066 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 00:29:40.010148  525066 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:29:40.010168  525066 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 00:29:40.010204  525066 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 00:29:40.010223  525066 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
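
The envvar.go:172 lines are client-go logging the default state of its client-side feature gates. These gates can be overridden through the process environment; the KUBE_FEATURE_<Name> variable convention below is an assumption about client-go's envvar reader and should be verified against the client-go version in use:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumed convention: client-go's envvar feature gates read KUBE_FEATURE_<Name>.
	// Any override must be set before the first client is constructed to take effect.
	os.Setenv("KUBE_FEATURE_WatchListClient", "true")
	fmt.Println("WatchListClient override:", os.Getenv("KUBE_FEATURE_WatchListClient"))
}
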
	I1212 00:29:40.010646  525066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:29:40.025803  525066 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 00:29:40.025893  525066 kubeadm.go:602] duration metric: took 30.929693ms to restartPrimaryControlPlane
	I1212 00:29:40.025918  525066 kubeadm.go:403] duration metric: took 82.978705ms to StartCluster
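
The two "duration metric" lines are the usual Go timing idiom: capture time.Now() when a phase starts and log time.Since() when it ends. A trivial sketch of the pattern:

package main

import (
	"log"
	"time"
)

func restartPrimaryControlPlane() {
	time.Sleep(30 * time.Millisecond) // stand-in for the real work
}

func main() {
	start := time.Now()
	restartPrimaryControlPlane()
	// Mirrors: "duration metric: took 30.929693ms to restartPrimaryControlPlane"
	log.Printf("duration metric: took %s to restartPrimaryControlPlane", time.Since(start))
}
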
	I1212 00:29:40.025961  525066 settings.go:142] acquiring lock: {Name:mk274c10b2238dc32d72b68ac2e1ec517b8a72b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.026057  525066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.026847  525066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.027182  525066 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:29:40.027614  525066 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
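
addons.go:527 prints the full toEnable map; only storage-provisioner and default-storageclass are true, which is why exactly those two manifests get applied below. A condensed, illustrative sketch of driving addon setup from such a map:

package main

import "fmt"

func main() {
	// Condensed version of the toEnable map logged above; entries are illustrative.
	toEnable := map[string]bool{
		"storage-provisioner":  true,
		"default-storageclass": true,
		"ingress":              false,
		"metrics-server":       false,
	}
	for name, on := range toEnable {
		if on {
			fmt.Println("enabling addon:", name)
		}
	}
}
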
	I1212 00:29:40.027718  525066 addons.go:70] Setting storage-provisioner=true in profile "functional-035643"
	I1212 00:29:40.027733  525066 addons.go:239] Setting addon storage-provisioner=true in "functional-035643"
	I1212 00:29:40.027759  525066 host.go:66] Checking if "functional-035643" exists ...
	I1212 00:29:40.027683  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:40.027963  525066 addons.go:70] Setting default-storageclass=true in profile "functional-035643"
	I1212 00:29:40.028014  525066 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-035643"
	I1212 00:29:40.028265  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.028431  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.031408  525066 out.go:179] * Verifying Kubernetes components...
	I1212 00:29:40.035144  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:40.072983  525066 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.073191  525066 kapi.go:59] client config for functional-035643: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:40.073564  525066 addons.go:239] Setting addon default-storageclass=true in "functional-035643"
	I1212 00:29:40.073635  525066 host.go:66] Checking if "functional-035643" exists ...
	I1212 00:29:40.074143  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.079735  525066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:29:40.083203  525066 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:40.083224  525066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:29:40.083308  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:40.126926  525066 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:40.126953  525066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:29:40.127024  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:40.157562  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:40.176759  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:40.228329  525066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:40.297459  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:40.324896  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
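
Each apply in the log is kubectl executed on the node with KUBECONFIG pointing at the in-cluster kubeconfig. A local sketch of the same invocation with os/exec (command and paths copied from the log; running it requires those files to exist on the host):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest replays the logged command:
//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig .../kubectl apply -f <path>
func applyManifest(path string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl", "apply", "-f", path)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println("apply failed:", err) // the log below shows exactly this failure mode
	}
}
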
	I1212 00:29:40.970121  525066 node_ready.go:35] waiting up to 6m0s for node "functional-035643" to be "Ready" ...
	I1212 00:29:40.970322  525066 type.go:168] "Request Body" body=""
	I1212 00:29:40.970407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:40.970561  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:40.970616  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.970718  525066 retry.go:31] will retry after 204.18222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.970890  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:40.970976  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.971113  525066 retry.go:31] will retry after 159.994769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
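
The retry.go:31 lines implement apply-with-backoff: every failed apply schedules another attempt after a growing, jittered delay (204ms, 255ms, 363ms, ... in the log). A generic sketch of that pattern, not minikube's actual retry helper:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to maxAttempts times, sleeping a jittered, growing delay
// between attempts, similar in spirit to the retry.go lines above.
func retry(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	attempt := 0
	err := retry(5, 200*time.Millisecond, func() error {
		attempt++
		if attempt < 3 {
			return errors.New("connection refused")
		}
		return nil
	})
	fmt.Println("final result:", err)
}
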
	I1212 00:29:40.971100  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:41.131658  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:41.175423  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:41.193550  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.193607  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.193625  525066 retry.go:31] will retry after 255.861028ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.245543  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.245583  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.245622  525066 retry.go:31] will retry after 363.545377ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.449762  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:41.471214  525066 type.go:168] "Request Body" body=""
	I1212 00:29:41.471319  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:41.471599  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:41.515695  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.515762  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.515785  525066 retry.go:31] will retry after 558.343872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.610204  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:41.681946  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.682005  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.682029  525066 retry.go:31] will retry after 553.13192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.971401  525066 type.go:168] "Request Body" body=""
	I1212 00:29:41.971545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:41.971960  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:42.075338  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:42.153789  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.153831  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.153875  525066 retry.go:31] will retry after 562.779161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.238244  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:42.309134  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.309235  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.309278  525066 retry.go:31] will retry after 839.848798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.470350  525066 type.go:168] "Request Body" body=""
	I1212 00:29:42.470438  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:42.470717  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:42.717299  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:42.779260  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.779300  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.779319  525066 retry.go:31] will retry after 1.384955704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.970802  525066 type.go:168] "Request Body" body=""
	I1212 00:29:42.970878  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:42.971167  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:42.971212  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
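
The repeating round_trippers blocks are client-go GETs of /api/v1/nodes/functional-035643 on a roughly 500ms tick; the warnings show each attempt failing with connection refused until the apiserver comes back. A sketch of the same readiness wait with the typed client (standard client-go APIs; the kubeconfig path is the one from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node until its Ready condition is True, retrying
// transient errors such as "connection refused" on the next tick.
func waitForNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22101-487723/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(cs, "functional-035643", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
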
	I1212 00:29:43.149494  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:43.213920  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:43.218125  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:43.218200  525066 retry.go:31] will retry after 1.154245365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:43.470517  525066 type.go:168] "Request Body" body=""
	I1212 00:29:43.470604  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:43.470922  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:43.970580  525066 type.go:168] "Request Body" body=""
	I1212 00:29:43.970743  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:43.971073  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:44.165470  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:44.225816  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:44.225880  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.225901  525066 retry.go:31] will retry after 2.063043455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.373318  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:44.437999  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:44.441831  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.441865  525066 retry.go:31] will retry after 1.856604218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.471071  525066 type.go:168] "Request Body" body=""
	I1212 00:29:44.471144  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:44.471437  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:44.971289  525066 type.go:168] "Request Body" body=""
	I1212 00:29:44.971379  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:44.971730  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:44.971780  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:45.470482  525066 type.go:168] "Request Body" body=""
	I1212 00:29:45.470622  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:45.470959  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:45.970491  525066 type.go:168] "Request Body" body=""
	I1212 00:29:45.970565  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:45.970940  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:46.289221  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:46.298644  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:46.387298  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:46.387341  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.387359  525066 retry.go:31] will retry after 2.162137781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.389923  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:46.389964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.389984  525066 retry.go:31] will retry after 2.885458194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.471167  525066 type.go:168] "Request Body" body=""
	I1212 00:29:46.471247  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:46.471565  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:46.971278  525066 type.go:168] "Request Body" body=""
	I1212 00:29:46.971393  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:46.971713  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:46.971800  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:47.471406  525066 type.go:168] "Request Body" body=""
	I1212 00:29:47.471481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:47.471794  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:47.970503  525066 type.go:168] "Request Body" body=""
	I1212 00:29:47.970590  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:47.970978  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:48.470459  525066 type.go:168] "Request Body" body=""
	I1212 00:29:48.470563  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:48.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:48.550228  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:48.609468  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:48.609564  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:48.609586  525066 retry.go:31] will retry after 5.142469671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:48.970999  525066 type.go:168] "Request Body" body=""
	I1212 00:29:48.971081  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:48.971378  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:49.275822  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:49.338921  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:49.338964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:49.338982  525066 retry.go:31] will retry after 3.130992497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:49.471334  525066 type.go:168] "Request Body" body=""
	I1212 00:29:49.471407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:49.471715  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:49.471774  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:49.970357  525066 type.go:168] "Request Body" body=""
	I1212 00:29:49.970428  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:49.970800  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:50.470449  525066 type.go:168] "Request Body" body=""
	I1212 00:29:50.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:50.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:50.970632  525066 type.go:168] "Request Body" body=""
	I1212 00:29:50.970736  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:50.971135  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:51.470850  525066 type.go:168] "Request Body" body=""
	I1212 00:29:51.470934  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:51.471301  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:51.971160  525066 type.go:168] "Request Body" body=""
	I1212 00:29:51.971232  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:51.971562  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:51.971629  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:52.470175  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:52.470342  525066 type.go:168] "Request Body" body=""
	I1212 00:29:52.470395  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:52.470704  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:52.525865  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:52.529169  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:52.529199  525066 retry.go:31] will retry after 5.202817608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:52.970512  525066 type.go:168] "Request Body" body=""
	I1212 00:29:52.970577  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:52.970929  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:53.470488  525066 type.go:168] "Request Body" body=""
	I1212 00:29:53.470560  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:53.470915  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:53.752286  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:53.818071  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:53.818120  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:53.818138  525066 retry.go:31] will retry after 7.493688168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:53.970432  525066 type.go:168] "Request Body" body=""
	I1212 00:29:53.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:53.970820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:54.470420  525066 type.go:168] "Request Body" body=""
	I1212 00:29:54.470487  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:54.470795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:54.470851  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:54.970811  525066 type.go:168] "Request Body" body=""
	I1212 00:29:54.970890  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:54.971241  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:55.471081  525066 type.go:168] "Request Body" body=""
	I1212 00:29:55.471155  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:55.471463  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:55.971189  525066 type.go:168] "Request Body" body=""
	I1212 00:29:55.971259  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:55.971627  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:56.470367  525066 type.go:168] "Request Body" body=""
	I1212 00:29:56.470446  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:56.470766  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:56.970401  525066 type.go:168] "Request Body" body=""
	I1212 00:29:56.970473  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:56.970813  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:56.970885  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:57.470373  525066 type.go:168] "Request Body" body=""
	I1212 00:29:57.470446  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:57.470716  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:57.732201  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:57.788085  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:57.792139  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:57.792170  525066 retry.go:31] will retry after 6.658571386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
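
The retry intervals in these blocks (6.66s here, then 9.38s, 16.64s and so on below) are consistent with jittered, growing backoff. A minimal sketch of retrying the same apply that way, assuming k8s.io/apimachinery's wait package; the manifest path is taken from the log, the backoff parameters are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 5 * time.Second, // base delay before the first retry
		Factor:   1.5,             // grow the delay on each attempt
		Jitter:   0.5,             // randomize, giving spreads like 6.6s vs 9.4s
		Steps:    5,               // give up after five attempts
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		out, err := exec.Command("kubectl", "apply", "--force",
			"-f", "/etc/kubernetes/addons/storage-provisioner.yaml").CombinedOutput()
		if err != nil {
			fmt.Printf("apply failed, will retry: %v\n%s", err, out)
			return false, nil // not done; retry after the next backoff interval
		}
		return true, nil // applied cleanly
	})
	fmt.Println("final:", err)
}
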
	[... same GET poll every ~500ms from 00:29:57.970 through 00:30:00.971, all refused; Ready-check retry warnings at 00:29:58.971 and 00:30:00.971 ...]
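
The node_ready.go loop summarized here is a Ready-condition poll against the node object. A minimal sketch of that kind of check, assuming client-go; the kubeconfig path and node name are taken from the log, and the 500ms sleep matches the timestamps above:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-035643", metav1.GetOptions{})
		if err != nil {
			// with the apiserver down this is the "connection refused" seen above
			fmt.Println("error getting node (will retry):", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the poll interval visible in the timestamps
	}
}
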
	I1212 00:30:01.312112  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:01.378306  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:01.384542  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:01.384581  525066 retry.go:31] will retry after 9.383564416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
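
Both endpoints in these errors refuse the TCP dial: kubectl's validation step talks to localhost:8441 from inside the node, while the Ready poll targets 192.168.49.2:8441, so the apply cannot even fetch the OpenAPI schema it validates against. A quick way to confirm that from Go, using only the standard library (addresses taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"192.168.49.2:8441", "localhost:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // "connect: connection refused" here
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}
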
	[... poll continues from 00:30:01.470 through 00:30:03.970, all refused; Ready-check retry warning at 00:30:03.471 ...]
	I1212 00:30:04.450915  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[... poll at 00:30:04.471, refused ...]
	I1212 00:30:04.504992  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:04.508551  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:04.508584  525066 retry.go:31] will retry after 16.635241248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll continues from 00:30:04.971 through 00:30:10.470, all refused; Ready-check retry warnings at 00:30:05.970 and 00:30:08.470 ...]
	I1212 00:30:10.768281  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:10.825103  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:10.828984  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:10.829014  525066 retry.go:31] will retry after 8.149625317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll continues from 00:30:10.971 through 00:30:18.970, all refused; Ready-check retry warnings at 00:30:10.971, 00:30:13.470, 00:30:15.471 and 00:30:17.970 ...]
	I1212 00:30:18.979423  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:19.044083  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:19.044119  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:19.044140  525066 retry.go:31] will retry after 30.537522265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
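
The hint in these errors, --validate=false, skips the OpenAPI download entirely. A sketch of taking that hint the way minikube's ssh_runner shells out (KUBECONFIG passed via the environment); note it is not a real fix here, since the apply itself still needs the apiserver on 8441:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nexit: %v\n", out, err) // still fails while 8441 refuses connections
}
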
	[... poll continues from 00:30:19.470 through 00:30:20.970, all refused; Ready-check retry warning at 00:30:19.971 ...]
	I1212 00:30:21.144446  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:21.207915  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:21.207964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:21.207983  525066 retry.go:31] will retry after 20.295589284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll continues from 00:30:21.471 through 00:30:41.470, all refused; Ready-check retry warnings roughly every 2–2.5s, from 00:30:22.471 through 00:30:41.471 ...]
	I1212 00:30:41.504392  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:41.561180  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:41.564784  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:41.564819  525066 retry.go:31] will retry after 29.925155821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-035643 polled every ~500ms from 00:30:41.97 through 00:30:49.47; every response was "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 repeated its "will retry" warning at 00:30:43.47, 00:30:45.97, and 00:30:47.97 ...]
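The elided loop above is the node-readiness wait: minikube GETs the node object every ~500ms and checks its Ready condition, retrying for as long as the connection is refused. A rough client-go equivalent (the node name and kubeconfig path are taken from the log; the loop itself is a hedged sketch, not minikube's node_ready.go):

```go
// Polls a node's Ready condition every 500ms, mirroring the GET loop
// in the log above. Sketch only; error handling is minimal.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 240; i++ { // ~2 minutes at the log's 500ms cadence
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-035643", metav1.GetOptions{})
		if err != nil {
			fmt.Println("error getting node (will retry):", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node Ready")
}
```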
	I1212 00:30:49.582168  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:49.635241  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:49.638539  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:49.638564  525066 retry.go:31] will retry after 36.706436998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
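The failure mode here is worth spelling out: kubectl's client-side validation first downloads the cluster's OpenAPI schema, so with the apiserver down the apply fails at the /openapi/v2 fetch before any manifest is even submitted, which is why the error suggests --validate=false. A standalone probe of that same endpoint (the URL and 32s timeout are taken from the log; the probe is illustrative, not kubectl's code):

```go
// Probes the apiserver's OpenAPI endpoint that kubectl validation needs.
// Against the stopped apiserver above, this prints "connection refused".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // same timeout as in the log's URL
		Transport: &http.Transport{
			// The apiserver cert is self-signed; skip verification for the probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		fmt.Println("openapi probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi endpoint status:", resp.Status)
}
```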
	I1212 00:30:49.971245  525066 type.go:168] "Request Body" body=""
	I1212 00:30:49.971317  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:49.971579  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:49.971624  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	[... polling continued at the same ~500ms cadence from 00:30:50.47 through 00:31:11.47, every attempt refused; node_ready.go:55 logged the same warning roughly every 2-2.5s (00:30:52.47, 00:30:54.97, 00:30:56.97, 00:30:59.47, 00:31:01.97, 00:31:04.47, 00:31:06.97, 00:31:09.47) ...]
	I1212 00:31:11.491140  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:31:11.552135  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:11.552186  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:11.552275  525066 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 00:31:11.970638  525066 type.go:168] "Request Body" body=""
	I1212 00:31:11.970757  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:11.971089  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:11.971151  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical refused polls continued from 00:31:12.47 through 00:31:25.97, with the node_ready.go:55 warning repeating at 00:31:14.47, 00:31:16.97, 00:31:18.97, 00:31:20.97, 00:31:23.47, and 00:31:25.97 ...]
	I1212 00:31:26.345425  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:31:26.402811  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:26.406955  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:26.407059  525066 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 00:31:26.410095  525066 out.go:179] * Enabled addons: 
	I1212 00:31:26.413891  525066 addons.go:530] duration metric: took 1m46.38627975s for enable addons: enabled=[]
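The "duration metric" line is a plain elapsed-time measurement over the whole enable-addons phase, with an empty enabled list because every apply above failed. The usual Go shape of such a metric (a sketch, not minikube's addons.go):

```go
// Minimal sketch of the elapsed-time metric logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	enabled := []string{}              // every addon apply failed here
	time.Sleep(120 * time.Millisecond) // stand-in for the enable work
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
}
```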
	I1212 00:31:26.471160  525066 type.go:168] "Request Body" body=""
	I1212 00:31:26.471237  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:26.471562  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the refused polls went on from 00:31:26.97 through 00:31:34.97, with further node_ready.go:55 warnings at 00:31:27.97, 00:31:29.97, 00:31:32.47, and 00:31:34.47 ...]
	I1212 00:31:35.471345  525066 type.go:168] "Request Body" body=""
	I1212 00:31:35.471417  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:35.471701  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:35.970416  525066 type.go:168] "Request Body" body=""
	I1212 00:31:35.970491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:35.970794  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:36.470413  525066 type.go:168] "Request Body" body=""
	I1212 00:31:36.470483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:36.470801  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:36.970488  525066 type.go:168] "Request Body" body=""
	I1212 00:31:36.970578  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:36.970944  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:36.970998  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:37.470498  525066 type.go:168] "Request Body" body=""
	I1212 00:31:37.470572  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:37.470867  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:37.970414  525066 type.go:168] "Request Body" body=""
	I1212 00:31:37.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:37.970840  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:38.470560  525066 type.go:168] "Request Body" body=""
	I1212 00:31:38.470640  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:38.470997  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:38.970458  525066 type.go:168] "Request Body" body=""
	I1212 00:31:38.970542  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:38.970875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:39.470418  525066 type.go:168] "Request Body" body=""
	I1212 00:31:39.470483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:39.470746  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:39.470792  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:39.970766  525066 type.go:168] "Request Body" body=""
	I1212 00:31:39.970840  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:39.971186  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:40.470676  525066 type.go:168] "Request Body" body=""
	I1212 00:31:40.470773  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:40.471187  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:40.970410  525066 type.go:168] "Request Body" body=""
	I1212 00:31:40.970498  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:40.970933  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:41.470798  525066 type.go:168] "Request Body" body=""
	I1212 00:31:41.470881  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:41.471270  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:41.471324  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:41.971113  525066 type.go:168] "Request Body" body=""
	I1212 00:31:41.971189  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:41.971540  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:42.471293  525066 type.go:168] "Request Body" body=""
	I1212 00:31:42.471364  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:42.471623  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:42.971377  525066 type.go:168] "Request Body" body=""
	I1212 00:31:42.971447  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:42.971777  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:43.470484  525066 type.go:168] "Request Body" body=""
	I1212 00:31:43.470559  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:43.470898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:43.970565  525066 type.go:168] "Request Body" body=""
	I1212 00:31:43.970635  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:43.970946  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:43.970995  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:44.470714  525066 type.go:168] "Request Body" body=""
	I1212 00:31:44.470786  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:44.471067  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:44.970961  525066 type.go:168] "Request Body" body=""
	I1212 00:31:44.971037  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:44.971349  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:45.471086  525066 type.go:168] "Request Body" body=""
	I1212 00:31:45.471160  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:45.471425  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:45.971255  525066 type.go:168] "Request Body" body=""
	I1212 00:31:45.971369  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:45.971732  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:45.971788  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:46.470449  525066 type.go:168] "Request Body" body=""
	I1212 00:31:46.470522  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:46.470852  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:46.970385  525066 type.go:168] "Request Body" body=""
	I1212 00:31:46.970452  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:46.970722  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:47.470437  525066 type.go:168] "Request Body" body=""
	I1212 00:31:47.470516  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:47.471054  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:47.970787  525066 type.go:168] "Request Body" body=""
	I1212 00:31:47.970859  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:47.971237  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:48.470392  525066 type.go:168] "Request Body" body=""
	I1212 00:31:48.470462  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:48.470738  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:48.470782  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:48.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:31:48.970518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:48.970858  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:49.470546  525066 type.go:168] "Request Body" body=""
	I1212 00:31:49.470624  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:49.470988  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:49.970873  525066 type.go:168] "Request Body" body=""
	I1212 00:31:49.970945  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:49.971253  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:50.471037  525066 type.go:168] "Request Body" body=""
	I1212 00:31:50.471108  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:50.471396  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:50.471445  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:50.971208  525066 type.go:168] "Request Body" body=""
	I1212 00:31:50.971282  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:50.971603  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:51.471213  525066 type.go:168] "Request Body" body=""
	I1212 00:31:51.471279  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:51.471540  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:51.971366  525066 type.go:168] "Request Body" body=""
	I1212 00:31:51.971439  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:51.971745  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:52.470473  525066 type.go:168] "Request Body" body=""
	I1212 00:31:52.470565  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:52.470989  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:52.970405  525066 type.go:168] "Request Body" body=""
	I1212 00:31:52.970478  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:52.970761  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:52.970811  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:53.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:31:53.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:53.470872  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:53.970497  525066 type.go:168] "Request Body" body=""
	I1212 00:31:53.970576  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:53.970925  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:54.470437  525066 type.go:168] "Request Body" body=""
	I1212 00:31:54.470505  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:54.470810  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:54.970825  525066 type.go:168] "Request Body" body=""
	I1212 00:31:54.970901  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:54.971247  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:54.971305  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:55.471027  525066 type.go:168] "Request Body" body=""
	I1212 00:31:55.471109  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:55.471438  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:55.971085  525066 type.go:168] "Request Body" body=""
	I1212 00:31:55.971149  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:55.971395  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:56.471224  525066 type.go:168] "Request Body" body=""
	I1212 00:31:56.471307  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:56.471633  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:56.970393  525066 type.go:168] "Request Body" body=""
	I1212 00:31:56.970474  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:56.970791  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:57.470420  525066 type.go:168] "Request Body" body=""
	I1212 00:31:57.470487  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:57.470757  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:57.470801  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:57.970446  525066 type.go:168] "Request Body" body=""
	I1212 00:31:57.970526  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:57.970833  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:58.470459  525066 type.go:168] "Request Body" body=""
	I1212 00:31:58.470532  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:58.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:58.970572  525066 type.go:168] "Request Body" body=""
	I1212 00:31:58.970646  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:58.970920  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:59.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:31:59.470548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:59.470889  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:59.470948  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:59.970930  525066 type.go:168] "Request Body" body=""
	I1212 00:31:59.971026  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:59.971389  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:00.470941  525066 type.go:168] "Request Body" body=""
	I1212 00:32:00.471069  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:00.471359  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:00.971229  525066 type.go:168] "Request Body" body=""
	I1212 00:32:00.971307  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:00.971647  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:01.470368  525066 type.go:168] "Request Body" body=""
	I1212 00:32:01.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:01.470795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:01.970448  525066 type.go:168] "Request Body" body=""
	I1212 00:32:01.970521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:01.970816  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:01.970862  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:02.470549  525066 type.go:168] "Request Body" body=""
	I1212 00:32:02.470624  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:02.470998  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:02.970773  525066 type.go:168] "Request Body" body=""
	I1212 00:32:02.970858  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:02.971312  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:03.471100  525066 type.go:168] "Request Body" body=""
	I1212 00:32:03.471172  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:03.471473  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:03.971278  525066 type.go:168] "Request Body" body=""
	I1212 00:32:03.971356  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:03.971686  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:03.971737  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:04.470407  525066 type.go:168] "Request Body" body=""
	I1212 00:32:04.470491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:04.470843  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:04.970701  525066 type.go:168] "Request Body" body=""
	I1212 00:32:04.970771  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:04.971049  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:05.470747  525066 type.go:168] "Request Body" body=""
	I1212 00:32:05.470824  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:05.471189  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:05.970759  525066 type.go:168] "Request Body" body=""
	I1212 00:32:05.970838  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:05.971177  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:06.470915  525066 type.go:168] "Request Body" body=""
	I1212 00:32:06.470997  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:06.471253  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:06.471294  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:06.971057  525066 type.go:168] "Request Body" body=""
	I1212 00:32:06.971134  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:06.971488  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:07.471269  525066 type.go:168] "Request Body" body=""
	I1212 00:32:07.471344  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:07.471670  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:07.970352  525066 type.go:168] "Request Body" body=""
	I1212 00:32:07.970421  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:07.970747  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:08.470438  525066 type.go:168] "Request Body" body=""
	I1212 00:32:08.470509  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:08.470878  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:08.970449  525066 type.go:168] "Request Body" body=""
	I1212 00:32:08.970521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:08.970871  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:08.970925  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:09.470391  525066 type.go:168] "Request Body" body=""
	I1212 00:32:09.470470  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:09.470756  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:09.970703  525066 type.go:168] "Request Body" body=""
	I1212 00:32:09.970779  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:09.971116  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:10.471033  525066 type.go:168] "Request Body" body=""
	I1212 00:32:10.471109  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:10.471417  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:10.971169  525066 type.go:168] "Request Body" body=""
	I1212 00:32:10.971238  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:10.971496  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:10.971539  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:11.471372  525066 type.go:168] "Request Body" body=""
	I1212 00:32:11.471451  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:11.471770  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:11.970469  525066 type.go:168] "Request Body" body=""
	I1212 00:32:11.970586  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:11.970898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:12.470383  525066 type.go:168] "Request Body" body=""
	I1212 00:32:12.470453  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:12.470788  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:12.970473  525066 type.go:168] "Request Body" body=""
	I1212 00:32:12.970545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:12.970889  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:13.470464  525066 type.go:168] "Request Body" body=""
	I1212 00:32:13.470555  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:13.470934  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:13.470994  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:13.970655  525066 type.go:168] "Request Body" body=""
	I1212 00:32:13.970754  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:13.971092  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:14.470457  525066 type.go:168] "Request Body" body=""
	I1212 00:32:14.470538  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:14.470903  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:14.970794  525066 type.go:168] "Request Body" body=""
	I1212 00:32:14.970878  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:14.971205  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:15.470971  525066 type.go:168] "Request Body" body=""
	I1212 00:32:15.471055  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:15.471372  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:15.471414  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:15.971237  525066 type.go:168] "Request Body" body=""
	I1212 00:32:15.971320  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:15.971640  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:16.470370  525066 type.go:168] "Request Body" body=""
	I1212 00:32:16.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:16.470782  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:16.970410  525066 type.go:168] "Request Body" body=""
	I1212 00:32:16.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:16.970823  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:17.470439  525066 type.go:168] "Request Body" body=""
	I1212 00:32:17.470517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:17.470864  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:17.970590  525066 type.go:168] "Request Body" body=""
	I1212 00:32:17.970664  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:17.971024  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:17.971078  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:18.470738  525066 type.go:168] "Request Body" body=""
	I1212 00:32:18.470805  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:18.471184  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:18.971020  525066 type.go:168] "Request Body" body=""
	I1212 00:32:18.971105  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:18.971458  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:19.471116  525066 type.go:168] "Request Body" body=""
	I1212 00:32:19.471188  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:19.471515  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:19.970337  525066 type.go:168] "Request Body" body=""
	I1212 00:32:19.970412  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:19.970828  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:20.471213  525066 type.go:168] "Request Body" body=""
	I1212 00:32:20.471293  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:20.471629  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:20.471692  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:20.970398  525066 type.go:168] "Request Body" body=""
	I1212 00:32:20.970472  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:20.970846  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:21.470529  525066 type.go:168] "Request Body" body=""
	I1212 00:32:21.470596  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:21.470869  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:21.970458  525066 type.go:168] "Request Body" body=""
	I1212 00:32:21.970531  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:21.970901  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:22.470489  525066 type.go:168] "Request Body" body=""
	I1212 00:32:22.470578  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:22.470923  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:22.970628  525066 type.go:168] "Request Body" body=""
	I1212 00:32:22.970719  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:22.970989  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:22.971032  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:23.470448  525066 type.go:168] "Request Body" body=""
	I1212 00:32:23.470535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:23.470894  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:23.970458  525066 type.go:168] "Request Body" body=""
	I1212 00:32:23.970535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:23.970896  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:24.470332  525066 type.go:168] "Request Body" body=""
	I1212 00:32:24.470407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:24.470716  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:24.970747  525066 type.go:168] "Request Body" body=""
	I1212 00:32:24.970820  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:24.971165  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:24.971223  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET poll against https://192.168.49.2:8441/api/v1/nodes/functional-035643 repeats every ~500 ms from 00:32:25 through 00:33:25, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logs the will-retry warning on about every fourth or fifth poll (every 2 to 2.5 s) ...]
	I1212 00:33:26.470369  525066 type.go:168] "Request Body" body=""
	I1212 00:33:26.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:26.470776  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:26.970409  525066 type.go:168] "Request Body" body=""
	I1212 00:33:26.970481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:26.970791  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:27.470405  525066 type.go:168] "Request Body" body=""
	I1212 00:33:27.470473  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:27.470753  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:27.970463  525066 type.go:168] "Request Body" body=""
	I1212 00:33:27.970537  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:27.970900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:28.470443  525066 type.go:168] "Request Body" body=""
	I1212 00:33:28.470515  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:28.470860  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:28.470912  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:28.970541  525066 type.go:168] "Request Body" body=""
	I1212 00:33:28.970608  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:28.970887  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:29.470489  525066 type.go:168] "Request Body" body=""
	I1212 00:33:29.470563  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:29.470897  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:29.970848  525066 type.go:168] "Request Body" body=""
	I1212 00:33:29.970931  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:29.971286  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:30.470592  525066 type.go:168] "Request Body" body=""
	I1212 00:33:30.470659  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:30.470963  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:30.471010  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:30.970450  525066 type.go:168] "Request Body" body=""
	I1212 00:33:30.970524  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:30.970868  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:31.470588  525066 type.go:168] "Request Body" body=""
	I1212 00:33:31.470666  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:31.471003  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:31.970391  525066 type.go:168] "Request Body" body=""
	I1212 00:33:31.970461  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:31.970788  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:32.470477  525066 type.go:168] "Request Body" body=""
	I1212 00:33:32.470550  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:32.470900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:32.970483  525066 type.go:168] "Request Body" body=""
	I1212 00:33:32.970557  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:32.970910  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:32.970970  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:33.470618  525066 type.go:168] "Request Body" body=""
	I1212 00:33:33.470712  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:33.470974  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:33.970444  525066 type.go:168] "Request Body" body=""
	I1212 00:33:33.970517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:33.970882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:34.470466  525066 type.go:168] "Request Body" body=""
	I1212 00:33:34.470544  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:34.470888  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:34.974811  525066 type.go:168] "Request Body" body=""
	I1212 00:33:34.974888  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:34.975210  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:34.975263  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:35.470393  525066 type.go:168] "Request Body" body=""
	I1212 00:33:35.470475  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:35.470774  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:35.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:33:35.970520  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:35.970921  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:36.470630  525066 type.go:168] "Request Body" body=""
	I1212 00:33:36.470714  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:36.470977  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:36.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:33:36.970519  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:36.970841  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:37.470461  525066 type.go:168] "Request Body" body=""
	I1212 00:33:37.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:37.470921  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:37.470982  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:37.970408  525066 type.go:168] "Request Body" body=""
	I1212 00:33:37.970475  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:37.970748  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:38.470525  525066 type.go:168] "Request Body" body=""
	I1212 00:33:38.470598  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:38.470966  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:38.970662  525066 type.go:168] "Request Body" body=""
	I1212 00:33:38.970757  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:38.971088  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:39.470794  525066 type.go:168] "Request Body" body=""
	I1212 00:33:39.470866  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:39.471135  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:39.471185  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:39.971164  525066 type.go:168] "Request Body" body=""
	I1212 00:33:39.971246  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:39.971584  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:40.470516  525066 type.go:168] "Request Body" body=""
	I1212 00:33:40.470591  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:40.470924  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:40.970415  525066 type.go:168] "Request Body" body=""
	I1212 00:33:40.970485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:40.971060  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:41.470451  525066 type.go:168] "Request Body" body=""
	I1212 00:33:41.470587  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:41.470946  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:41.970903  525066 type.go:168] "Request Body" body=""
	I1212 00:33:41.970980  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:41.971291  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:41.971344  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:42.471094  525066 type.go:168] "Request Body" body=""
	I1212 00:33:42.471178  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:42.471572  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:42.971249  525066 type.go:168] "Request Body" body=""
	I1212 00:33:42.971320  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:42.971651  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:43.470400  525066 type.go:168] "Request Body" body=""
	I1212 00:33:43.470476  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:43.470790  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:43.970434  525066 type.go:168] "Request Body" body=""
	I1212 00:33:43.970503  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:43.970849  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:44.470442  525066 type.go:168] "Request Body" body=""
	I1212 00:33:44.470512  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:44.470828  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:44.470882  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:44.970788  525066 type.go:168] "Request Body" body=""
	I1212 00:33:44.970861  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:44.971179  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:45.471027  525066 type.go:168] "Request Body" body=""
	I1212 00:33:45.471095  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:45.471384  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:45.971175  525066 type.go:168] "Request Body" body=""
	I1212 00:33:45.971254  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:45.971602  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:46.471390  525066 type.go:168] "Request Body" body=""
	I1212 00:33:46.471469  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:46.471785  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:46.471843  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:46.970475  525066 type.go:168] "Request Body" body=""
	I1212 00:33:46.970546  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:46.970906  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:47.470431  525066 type.go:168] "Request Body" body=""
	I1212 00:33:47.470507  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:47.470868  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:47.970663  525066 type.go:168] "Request Body" body=""
	I1212 00:33:47.970759  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:47.971097  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:48.470801  525066 type.go:168] "Request Body" body=""
	I1212 00:33:48.470875  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:48.471136  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:48.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:33:48.970518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:48.970883  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:48.970943  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:49.470484  525066 type.go:168] "Request Body" body=""
	I1212 00:33:49.470558  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:49.470901  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:49.970711  525066 type.go:168] "Request Body" body=""
	I1212 00:33:49.970783  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:49.971100  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:50.471121  525066 type.go:168] "Request Body" body=""
	I1212 00:33:50.471191  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:50.471492  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:50.971285  525066 type.go:168] "Request Body" body=""
	I1212 00:33:50.971368  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:50.971704  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:50.971758  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:51.470388  525066 type.go:168] "Request Body" body=""
	I1212 00:33:51.470461  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:51.470790  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:51.970459  525066 type.go:168] "Request Body" body=""
	I1212 00:33:51.970536  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:51.970878  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:52.470599  525066 type.go:168] "Request Body" body=""
	I1212 00:33:52.470668  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:52.471040  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:52.970519  525066 type.go:168] "Request Body" body=""
	I1212 00:33:52.970595  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:52.970943  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:53.470479  525066 type.go:168] "Request Body" body=""
	I1212 00:33:53.470557  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:53.470898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:53.470998  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:53.970658  525066 type.go:168] "Request Body" body=""
	I1212 00:33:53.970752  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:53.971087  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:54.470382  525066 type.go:168] "Request Body" body=""
	I1212 00:33:54.470475  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:54.470745  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:54.971392  525066 type.go:168] "Request Body" body=""
	I1212 00:33:54.971460  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:54.971785  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:55.470484  525066 type.go:168] "Request Body" body=""
	I1212 00:33:55.470559  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:55.470901  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:55.970438  525066 type.go:168] "Request Body" body=""
	I1212 00:33:55.970506  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:55.970773  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:55.970819  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:56.470532  525066 type.go:168] "Request Body" body=""
	I1212 00:33:56.470620  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:56.470993  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:56.970746  525066 type.go:168] "Request Body" body=""
	I1212 00:33:56.970823  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:56.971164  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:57.470798  525066 type.go:168] "Request Body" body=""
	I1212 00:33:57.470887  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:57.471201  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:57.970983  525066 type.go:168] "Request Body" body=""
	I1212 00:33:57.971057  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:57.971379  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:57.971435  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:58.471288  525066 type.go:168] "Request Body" body=""
	I1212 00:33:58.471374  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:58.471710  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:58.970417  525066 type.go:168] "Request Body" body=""
	I1212 00:33:58.970485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:58.970786  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:59.470436  525066 type.go:168] "Request Body" body=""
	I1212 00:33:59.470537  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:59.470835  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:59.970792  525066 type.go:168] "Request Body" body=""
	I1212 00:33:59.970864  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:59.971190  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:00.471100  525066 type.go:168] "Request Body" body=""
	I1212 00:34:00.471178  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:00.471446  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:00.471491  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:00.971317  525066 type.go:168] "Request Body" body=""
	I1212 00:34:00.971392  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:00.971704  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:01.470405  525066 type.go:168] "Request Body" body=""
	I1212 00:34:01.470478  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:01.470824  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:01.970499  525066 type.go:168] "Request Body" body=""
	I1212 00:34:01.970587  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:01.970878  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:02.470450  525066 type.go:168] "Request Body" body=""
	I1212 00:34:02.470548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:02.470875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:02.970486  525066 type.go:168] "Request Body" body=""
	I1212 00:34:02.970558  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:02.970903  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:02.970960  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:03.470466  525066 type.go:168] "Request Body" body=""
	I1212 00:34:03.470543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:03.470886  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:03.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:34:03.970517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:03.970835  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:04.470526  525066 type.go:168] "Request Body" body=""
	I1212 00:34:04.470601  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:04.470936  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:04.970883  525066 type.go:168] "Request Body" body=""
	I1212 00:34:04.970968  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:04.971228  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:04.971278  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:05.471031  525066 type.go:168] "Request Body" body=""
	I1212 00:34:05.471105  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:05.471416  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:05.971158  525066 type.go:168] "Request Body" body=""
	I1212 00:34:05.971232  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:05.971554  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:06.471186  525066 type.go:168] "Request Body" body=""
	I1212 00:34:06.471254  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:06.471579  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:06.971380  525066 type.go:168] "Request Body" body=""
	I1212 00:34:06.971454  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:06.971795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:06.971845  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:07.470453  525066 type.go:168] "Request Body" body=""
	I1212 00:34:07.470532  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:07.470891  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:07.970570  525066 type.go:168] "Request Body" body=""
	I1212 00:34:07.970640  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:07.970974  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:08.470454  525066 type.go:168] "Request Body" body=""
	I1212 00:34:08.470526  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:08.470855  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:08.970451  525066 type.go:168] "Request Body" body=""
	I1212 00:34:08.970523  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:08.970873  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:09.470384  525066 type.go:168] "Request Body" body=""
	I1212 00:34:09.470463  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:09.470759  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:09.470810  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:09.970476  525066 type.go:168] "Request Body" body=""
	I1212 00:34:09.970548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:09.970914  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:10.470339  525066 type.go:168] "Request Body" body=""
	I1212 00:34:10.470412  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:10.470749  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:10.970418  525066 type.go:168] "Request Body" body=""
	I1212 00:34:10.970489  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:10.970837  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:11.470433  525066 type.go:168] "Request Body" body=""
	I1212 00:34:11.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:11.470863  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:11.470914  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:11.970593  525066 type.go:168] "Request Body" body=""
	I1212 00:34:11.970672  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:11.971074  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:12.470513  525066 type.go:168] "Request Body" body=""
	I1212 00:34:12.470580  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:12.470938  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:12.970456  525066 type.go:168] "Request Body" body=""
	I1212 00:34:12.970537  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:12.970882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:13.470574  525066 type.go:168] "Request Body" body=""
	I1212 00:34:13.470650  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:13.471032  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:13.471090  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:13.970419  525066 type.go:168] "Request Body" body=""
	I1212 00:34:13.970498  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:13.970791  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:14.470474  525066 type.go:168] "Request Body" body=""
	I1212 00:34:14.470543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:14.470845  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:14.970750  525066 type.go:168] "Request Body" body=""
	I1212 00:34:14.970824  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:14.971143  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:15.470788  525066 type.go:168] "Request Body" body=""
	I1212 00:34:15.470856  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:15.471125  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:15.471166  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	[log truncated: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-035643 poll repeats every ~500 ms from 00:34:15.970468 through 00:35:16.970761; every attempt gets an empty response, and node_ready.go keeps logging the same "will retry" warning: Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused]
	I1212 00:35:17.470431  525066 type.go:168] "Request Body" body=""
	I1212 00:35:17.470501  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:17.470854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:17.970433  525066 type.go:168] "Request Body" body=""
	I1212 00:35:17.970504  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:17.970880  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:17.970936  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:18.470421  525066 type.go:168] "Request Body" body=""
	I1212 00:35:18.470491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:18.470768  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:18.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:35:18.970526  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:18.970872  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:19.470463  525066 type.go:168] "Request Body" body=""
	I1212 00:35:19.470546  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:19.470905  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:19.970857  525066 type.go:168] "Request Body" body=""
	I1212 00:35:19.970930  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:19.971189  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:19.971235  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:20.471222  525066 type.go:168] "Request Body" body=""
	I1212 00:35:20.471296  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:20.471592  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:20.971375  525066 type.go:168] "Request Body" body=""
	I1212 00:35:20.971447  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:20.971753  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:21.470418  525066 type.go:168] "Request Body" body=""
	I1212 00:35:21.470490  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:21.470805  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:21.970405  525066 type.go:168] "Request Body" body=""
	I1212 00:35:21.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:21.970793  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:22.470400  525066 type.go:168] "Request Body" body=""
	I1212 00:35:22.470486  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:22.470834  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:22.470893  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:22.970432  525066 type.go:168] "Request Body" body=""
	I1212 00:35:22.970507  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:22.970864  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:23.470610  525066 type.go:168] "Request Body" body=""
	I1212 00:35:23.470694  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:23.471022  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:23.970472  525066 type.go:168] "Request Body" body=""
	I1212 00:35:23.970544  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:23.970934  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:24.470526  525066 type.go:168] "Request Body" body=""
	I1212 00:35:24.470602  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:24.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:24.470937  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:24.970814  525066 type.go:168] "Request Body" body=""
	I1212 00:35:24.970900  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:24.971212  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:25.470981  525066 type.go:168] "Request Body" body=""
	I1212 00:35:25.471083  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:25.471412  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:25.971186  525066 type.go:168] "Request Body" body=""
	I1212 00:35:25.971270  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:25.971542  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:26.471296  525066 type.go:168] "Request Body" body=""
	I1212 00:35:26.471372  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:26.471691  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:26.471748  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:26.970425  525066 type.go:168] "Request Body" body=""
	I1212 00:35:26.970494  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:26.970788  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:27.470473  525066 type.go:168] "Request Body" body=""
	I1212 00:35:27.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:27.470900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:27.970608  525066 type.go:168] "Request Body" body=""
	I1212 00:35:27.970694  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:27.971049  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:28.470775  525066 type.go:168] "Request Body" body=""
	I1212 00:35:28.470856  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:28.471187  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:28.970958  525066 type.go:168] "Request Body" body=""
	I1212 00:35:28.971022  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:28.971277  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:28.971316  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:29.471162  525066 type.go:168] "Request Body" body=""
	I1212 00:35:29.471240  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:29.471593  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:29.970376  525066 type.go:168] "Request Body" body=""
	I1212 00:35:29.970454  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:29.970816  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:30.471109  525066 type.go:168] "Request Body" body=""
	I1212 00:35:30.471183  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:30.471480  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:30.971287  525066 type.go:168] "Request Body" body=""
	I1212 00:35:30.971360  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:30.971672  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:30.971729  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:31.470405  525066 type.go:168] "Request Body" body=""
	I1212 00:35:31.470485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:31.470830  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:31.970549  525066 type.go:168] "Request Body" body=""
	I1212 00:35:31.970619  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:31.970957  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:32.470649  525066 type.go:168] "Request Body" body=""
	I1212 00:35:32.470745  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:32.471093  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:32.970460  525066 type.go:168] "Request Body" body=""
	I1212 00:35:32.970533  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:32.970861  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:33.470443  525066 type.go:168] "Request Body" body=""
	I1212 00:35:33.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:33.470783  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:33.470825  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:33.970454  525066 type.go:168] "Request Body" body=""
	I1212 00:35:33.970535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:33.970883  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:34.470595  525066 type.go:168] "Request Body" body=""
	I1212 00:35:34.470673  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:34.471021  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:34.970778  525066 type.go:168] "Request Body" body=""
	I1212 00:35:34.970845  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:34.971108  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:35.470789  525066 type.go:168] "Request Body" body=""
	I1212 00:35:35.470893  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:35.471408  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:35.471455  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:35.971178  525066 type.go:168] "Request Body" body=""
	I1212 00:35:35.971249  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:35.971545  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:36.471287  525066 type.go:168] "Request Body" body=""
	I1212 00:35:36.471358  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:36.471623  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:36.970386  525066 type.go:168] "Request Body" body=""
	I1212 00:35:36.970468  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:36.970815  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:37.470527  525066 type.go:168] "Request Body" body=""
	I1212 00:35:37.470612  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:37.470950  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:37.970440  525066 type.go:168] "Request Body" body=""
	I1212 00:35:37.970510  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:37.970824  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:37.970880  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:38.470422  525066 type.go:168] "Request Body" body=""
	I1212 00:35:38.470503  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:38.470828  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:38.970459  525066 type.go:168] "Request Body" body=""
	I1212 00:35:38.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:38.970885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:39.470567  525066 type.go:168] "Request Body" body=""
	I1212 00:35:39.470634  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:39.470915  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:39.971016  525066 type.go:168] "Request Body" body=""
	I1212 00:35:39.971090  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:39.971449  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:39.971507  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:40.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:35:40.470535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:40.470907  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:40.970388  525066 type.go:168] "Request Body" body=""
	I1212 00:35:40.970449  525066 node_ready.go:38] duration metric: took 6m0.000230679s for node "functional-035643" to be "Ready" ...
	I1212 00:35:40.973928  525066 out.go:203] 
	W1212 00:35:40.976747  525066 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 00:35:40.976773  525066 out.go:285] * 
	W1212 00:35:40.981440  525066 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:35:40.984739  525066 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-035643 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.778498817s for "functional-035643" cluster.
I1212 00:35:41.570092  490954 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
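For context, the long retry loop captured in the stderr above is minikube issuing GET /api/v1/nodes/functional-035643 roughly every 500ms and checking the node's Ready condition until its 6m0s deadline; every attempt failed with "connect: connection refused" because the apiserver on 192.168.49.2:8441 never came back. A minimal client-go sketch of an equivalent wait (a hypothetical waitNodeReady helper, assuming a kubeconfig at the default path; this is not the harness's own code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition, retrying through transient
// errors (e.g. "connect: connection refused" while the apiserver is down),
// until the timeout expires -- mirroring the loop in the log above.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // swallow the error and retry, as the harness does
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	err = waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "functional-035643", 6*time.Minute)
	fmt.Println("node Ready wait result:", err) // a context deadline error reproduces the failure above
}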
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-035643
helpers_test.go:244: (dbg) docker inspect functional-035643:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	        "Created": "2025-12-12T00:21:16.539894649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:21:16.600605162Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hostname",
	        "HostsPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hosts",
	        "LogPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a-json.log",
	        "Name": "/functional-035643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-035643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	                "LowerDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-035643",
	                "Source": "/var/lib/docker/volumes/functional-035643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035643",
	                "name.minikube.sigs.k8s.io": "functional-035643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ede6a17442d6bf83b8f4c9f93f252345cec3d0406f82de2d6bd2cfd4713e2163",
	            "SandboxKey": "/var/run/docker/netns/ede6a17442d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d5:12:89:ea:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad01995b183fdebead6c725e2b942ae8ce2d3964b3552789fe5b50ee7e7239a3",
	                    "EndpointID": "d429a1cd0f840d042af4ad7ea0bda6067a342be7fb552083411004a3604b0124",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035643",
	                        "02b8c8e636a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
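The inspect output above also shows how the cluster is reached from the host: the apiserver's 8441/tcp is published on 127.0.0.1:33186. That mapping can be extracted with docker inspect's Go-template -f flag; a small sketch (profile name copied from this report):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The -f template indexes the same JSON shown above:
	// .NetworkSettings.Ports["8441/tcp"][0].HostPort
	out, err := exec.Command("docker", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
		"functional-035643").Output()
	if err != nil {
		panic(err)
	}
	// Prints 33186 for the inspect output above.
	fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
}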
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643: exit status 2 (355.951346ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
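minikube status signals degraded components through its exit code as well as its output, which is why the harness accepts exit status 2 here: the Host line still prints Running while the rest of the cluster is down. A minimal Go sketch for capturing both the output and the exit code (binary path and profile name copied from the commands above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "functional-035643")
	out, err := cmd.Output() // stdout is still returned alongside an ExitError
	host := strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero exit with Host=Running matches the "(may be ok)" case above:
		// the container is up but other components are degraded.
		fmt.Printf("host=%q exit=%d\n", host, ee.ExitCode())
		return
	}
	if err != nil {
		panic(err) // e.g. binary not found
	}
	fmt.Printf("host=%q exit=0\n", host)
}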
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-035643 logs -n 25: (1.119176398s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-921447 image rm kicbase/echo-server:functional-921447 --alsologtostderr                                                                │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls                                                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/test/nested/copy/490954/hosts                                                                                 │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                               │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/ssl/certs/490954.pem                                                                                          │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls                                                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /usr/share/ca-certificates/490954.pem                                                                              │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image save --daemon kicbase/echo-server:functional-921447 --alsologtostderr                                                     │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/ssl/certs/4909542.pem                                                                                         │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /usr/share/ca-certificates/4909542.pem                                                                             │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ update-context │ functional-921447 update-context --alsologtostderr -v=2                                                                                           │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ update-context │ functional-921447 update-context --alsologtostderr -v=2                                                                                           │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ update-context │ functional-921447 update-context --alsologtostderr -v=2                                                                                           │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls --format yaml --alsologtostderr                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls --format short --alsologtostderr                                                                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls --format json --alsologtostderr                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh pgrep buildkitd                                                                                                             │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ image          │ functional-921447 image ls --format table --alsologtostderr                                                                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image build -t localhost/my-image:functional-921447 testdata/build --alsologtostderr                                            │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls                                                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ delete         │ -p functional-921447                                                                                                                              │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ start          │ -p functional-035643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ start          │ -p functional-035643 --alsologtostderr -v=8                                                                                                       │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:29 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:29:34
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:29:34.833608  525066 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:29:34.833799  525066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:34.833830  525066 out.go:374] Setting ErrFile to fd 2...
	I1212 00:29:34.833859  525066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:34.834244  525066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:29:34.834787  525066 out.go:368] Setting JSON to false
	I1212 00:29:34.835727  525066 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11520,"bootTime":1765487855,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:29:34.836335  525066 start.go:143] virtualization:  
	I1212 00:29:34.841302  525066 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:29:34.846669  525066 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:29:34.846785  525066 notify.go:221] Checking for updates...
	I1212 00:29:34.852399  525066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:29:34.855222  525066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:34.857924  525066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:29:34.860585  525066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:29:34.863145  525066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:29:34.866639  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:34.866818  525066 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:29:34.892569  525066 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:29:34.892680  525066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:34.954074  525066 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:29:34.944774098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:29:34.954186  525066 docker.go:319] overlay module found
	I1212 00:29:34.958427  525066 out.go:179] * Using the docker driver based on existing profile
	I1212 00:29:34.960983  525066 start.go:309] selected driver: docker
	I1212 00:29:34.961005  525066 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:34.961104  525066 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:29:34.961212  525066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:35.019269  525066 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:29:35.008770771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:29:35.019716  525066 cni.go:84] Creating CNI manager for ""
	I1212 00:29:35.019778  525066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:35.019842  525066 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:35.022879  525066 out.go:179] * Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	I1212 00:29:35.025659  525066 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:29:35.028463  525066 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:29:35.031434  525066 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:29:35.031495  525066 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:29:35.031510  525066 cache.go:65] Caching tarball of preloaded images
	I1212 00:29:35.031544  525066 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:29:35.031603  525066 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:29:35.031614  525066 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:29:35.031729  525066 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/config.json ...
	I1212 00:29:35.051219  525066 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:29:35.051245  525066 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:29:35.051267  525066 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:29:35.051303  525066 start.go:360] acquireMachinesLock for functional-035643: {Name:mkb0cdc7d354412594dc63c0234fde00134e758d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:29:35.051387  525066 start.go:364] duration metric: took 54.908µs to acquireMachinesLock for "functional-035643"
	I1212 00:29:35.051416  525066 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:29:35.051428  525066 fix.go:54] fixHost starting: 
	I1212 00:29:35.051696  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:35.069320  525066 fix.go:112] recreateIfNeeded on functional-035643: state=Running err=<nil>
	W1212 00:29:35.069352  525066 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:29:35.072554  525066 out.go:252] * Updating the running docker "functional-035643" container ...
	I1212 00:29:35.072600  525066 machine.go:94] provisionDockerMachine start ...
	I1212 00:29:35.072693  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.090330  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.090669  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.090706  525066 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:29:35.238363  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:29:35.238387  525066 ubuntu.go:182] provisioning hostname "functional-035643"
	I1212 00:29:35.238453  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.256201  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.256511  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.256528  525066 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-035643 && echo "functional-035643" | sudo tee /etc/hostname
	I1212 00:29:35.418094  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:29:35.418176  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.436164  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.436475  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.436494  525066 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-035643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-035643/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-035643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:29:35.594938  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:29:35.594969  525066 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 00:29:35.595009  525066 ubuntu.go:190] setting up certificates
	I1212 00:29:35.595026  525066 provision.go:84] configureAuth start
	I1212 00:29:35.595111  525066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:29:35.612398  525066 provision.go:143] copyHostCerts
	I1212 00:29:35.612439  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:29:35.612482  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 00:29:35.612494  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:29:35.612571  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 00:29:35.612671  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:29:35.612699  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 00:29:35.612707  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:29:35.612734  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 00:29:35.612781  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:29:35.612802  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 00:29:35.612813  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:29:35.612837  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 00:29:35.612889  525066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.functional-035643 san=[127.0.0.1 192.168.49.2 functional-035643 localhost minikube]
	I1212 00:29:35.977748  525066 provision.go:177] copyRemoteCerts
	I1212 00:29:35.977818  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:29:35.977857  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.995348  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.106772  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:29:36.106859  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:29:36.126035  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:29:36.126112  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:29:36.143996  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:29:36.144114  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:29:36.161387  525066 provision.go:87] duration metric: took 566.343959ms to configureAuth
	I1212 00:29:36.161415  525066 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:29:36.161612  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:36.161722  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.179565  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:36.179872  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:36.179896  525066 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:29:36.525259  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:29:36.525285  525066 machine.go:97] duration metric: took 1.45267532s to provisionDockerMachine
	I1212 00:29:36.525297  525066 start.go:293] postStartSetup for "functional-035643" (driver="docker")
	I1212 00:29:36.525310  525066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:29:36.525385  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:29:36.525432  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.544323  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.650745  525066 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:29:36.654027  525066 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 00:29:36.654058  525066 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 00:29:36.654063  525066 command_runner.go:130] > VERSION_ID="12"
	I1212 00:29:36.654067  525066 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 00:29:36.654072  525066 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 00:29:36.654076  525066 command_runner.go:130] > ID=debian
	I1212 00:29:36.654081  525066 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 00:29:36.654086  525066 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 00:29:36.654098  525066 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 00:29:36.654164  525066 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:29:36.654184  525066 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:29:36.654203  525066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 00:29:36.654261  525066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 00:29:36.654368  525066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 00:29:36.654379  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /etc/ssl/certs/4909542.pem
	I1212 00:29:36.654462  525066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> hosts in /etc/test/nested/copy/490954
	I1212 00:29:36.654470  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> /etc/test/nested/copy/490954/hosts
	I1212 00:29:36.654523  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/490954
	I1212 00:29:36.661942  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:29:36.678936  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts --> /etc/test/nested/copy/490954/hosts (40 bytes)
	I1212 00:29:36.696209  525066 start.go:296] duration metric: took 170.896684ms for postStartSetup
	I1212 00:29:36.696330  525066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:29:36.696401  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.716202  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.819154  525066 command_runner.go:130] > 18%
	I1212 00:29:36.819742  525066 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:29:36.823869  525066 command_runner.go:130] > 160G
	I1212 00:29:36.824320  525066 fix.go:56] duration metric: took 1.772888094s for fixHost
	I1212 00:29:36.824342  525066 start.go:83] releasing machines lock for "functional-035643", held for 1.772938226s
	I1212 00:29:36.824419  525066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:29:36.841414  525066 ssh_runner.go:195] Run: cat /version.json
	I1212 00:29:36.841444  525066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:29:36.841465  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.841499  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.858975  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.864277  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:37.063000  525066 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:29:37.063067  525066 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1212 00:29:37.063223  525066 ssh_runner.go:195] Run: systemctl --version
	I1212 00:29:37.069375  525066 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 00:29:37.069421  525066 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 00:29:37.069789  525066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:29:37.107153  525066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:29:37.111099  525066 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:29:37.111476  525066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:29:37.111538  525066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:29:37.119321  525066 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:29:37.119346  525066 start.go:496] detecting cgroup driver to use...
	I1212 00:29:37.119377  525066 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:29:37.119429  525066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:29:37.134288  525066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:29:37.147114  525066 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:29:37.147210  525066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:29:37.162260  525066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:29:37.175226  525066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:29:37.287755  525066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:29:37.404746  525066 docker.go:234] disabling docker service ...
	I1212 00:29:37.404828  525066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:29:37.419834  525066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:29:37.433027  525066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:29:37.553874  525066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:29:37.677379  525066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:29:37.696856  525066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:29:37.711415  525066 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 00:29:37.712568  525066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:29:37.712642  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.724126  525066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:29:37.724197  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.733568  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.743368  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.752442  525066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:29:37.761570  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.771444  525066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.780014  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.788901  525066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:29:37.795786  525066 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:29:37.796743  525066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:29:37.804315  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:37.916494  525066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:29:38.098236  525066 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:29:38.098362  525066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:29:38.102398  525066 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 00:29:38.102430  525066 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:29:38.102438  525066 command_runner.go:130] > Device: 0,72	Inode: 1642        Links: 1
	I1212 00:29:38.102445  525066 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:29:38.102451  525066 command_runner.go:130] > Access: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102458  525066 command_runner.go:130] > Modify: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102463  525066 command_runner.go:130] > Change: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102467  525066 command_runner.go:130] >  Birth: -
	I1212 00:29:38.102500  525066 start.go:564] Will wait 60s for crictl version
	I1212 00:29:38.102554  525066 ssh_runner.go:195] Run: which crictl
	I1212 00:29:38.105961  525066 command_runner.go:130] > /usr/local/bin/crictl
	I1212 00:29:38.106209  525066 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:29:38.130147  525066 command_runner.go:130] > Version:  0.1.0
	I1212 00:29:38.130215  525066 command_runner.go:130] > RuntimeName:  cri-o
	I1212 00:29:38.130236  525066 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1212 00:29:38.130255  525066 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:29:38.130299  525066 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:29:38.130400  525066 ssh_runner.go:195] Run: crio --version
	I1212 00:29:38.156955  525066 command_runner.go:130] > crio version 1.34.3
	I1212 00:29:38.157026  525066 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 00:29:38.157055  525066 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 00:29:38.157075  525066 command_runner.go:130] >    GitTreeState:   dirty
	I1212 00:29:38.157101  525066 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 00:29:38.157125  525066 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 00:29:38.157142  525066 command_runner.go:130] >    Compiler:       gc
	I1212 00:29:38.157162  525066 command_runner.go:130] >    Platform:       linux/arm64
	I1212 00:29:38.157188  525066 command_runner.go:130] >    Linkmode:       static
	I1212 00:29:38.157205  525066 command_runner.go:130] >    BuildTags:
	I1212 00:29:38.157231  525066 command_runner.go:130] >      static
	I1212 00:29:38.157260  525066 command_runner.go:130] >      netgo
	I1212 00:29:38.157278  525066 command_runner.go:130] >      osusergo
	I1212 00:29:38.157296  525066 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 00:29:38.157309  525066 command_runner.go:130] >      seccomp
	I1212 00:29:38.157334  525066 command_runner.go:130] >      apparmor
	I1212 00:29:38.157350  525066 command_runner.go:130] >      selinux
	I1212 00:29:38.157366  525066 command_runner.go:130] >    LDFlags:          unknown
	I1212 00:29:38.157384  525066 command_runner.go:130] >    SeccompEnabled:   true
	I1212 00:29:38.157415  525066 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 00:29:38.159818  525066 ssh_runner.go:195] Run: crio --version
	I1212 00:29:38.187365  525066 command_runner.go:130] > crio version 1.34.3
	I1212 00:29:38.187391  525066 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 00:29:38.187398  525066 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 00:29:38.187403  525066 command_runner.go:130] >    GitTreeState:   dirty
	I1212 00:29:38.187408  525066 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 00:29:38.187414  525066 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 00:29:38.187418  525066 command_runner.go:130] >    Compiler:       gc
	I1212 00:29:38.187438  525066 command_runner.go:130] >    Platform:       linux/arm64
	I1212 00:29:38.187447  525066 command_runner.go:130] >    Linkmode:       static
	I1212 00:29:38.187451  525066 command_runner.go:130] >    BuildTags:
	I1212 00:29:38.187455  525066 command_runner.go:130] >      static
	I1212 00:29:38.187459  525066 command_runner.go:130] >      netgo
	I1212 00:29:38.187463  525066 command_runner.go:130] >      osusergo
	I1212 00:29:38.187468  525066 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 00:29:38.187481  525066 command_runner.go:130] >      seccomp
	I1212 00:29:38.187489  525066 command_runner.go:130] >      apparmor
	I1212 00:29:38.187494  525066 command_runner.go:130] >      selinux
	I1212 00:29:38.187502  525066 command_runner.go:130] >    LDFlags:          unknown
	I1212 00:29:38.187507  525066 command_runner.go:130] >    SeccompEnabled:   true
	I1212 00:29:38.187511  525066 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 00:29:38.193058  525066 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:29:38.195137  525066 cli_runner.go:164] Run: docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:29:38.211553  525066 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:29:38.215227  525066 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 00:29:38.215507  525066 kubeadm.go:884] updating cluster {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:29:38.215633  525066 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:29:38.215688  525066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:38.248801  525066 command_runner.go:130] > {
	I1212 00:29:38.248822  525066 command_runner.go:130] >   "images":  [
	I1212 00:29:38.248827  525066 command_runner.go:130] >     {
	I1212 00:29:38.248837  525066 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:29:38.248842  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.248851  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:29:38.248855  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248859  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.248869  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 00:29:38.248877  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 00:29:38.248880  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248885  525066 command_runner.go:130] >       "size":  "111333938",
	I1212 00:29:38.248893  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.248898  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.248901  525066 command_runner.go:130] >     },
	I1212 00:29:38.248905  525066 command_runner.go:130] >     {
	I1212 00:29:38.248911  525066 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:29:38.248926  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.248931  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:29:38.248935  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248939  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.248951  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 00:29:38.248960  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:29:38.248967  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248971  525066 command_runner.go:130] >       "size":  "29037500",
	I1212 00:29:38.248975  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.248983  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.248987  525066 command_runner.go:130] >     },
	I1212 00:29:38.248990  525066 command_runner.go:130] >     {
	I1212 00:29:38.248998  525066 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:29:38.249004  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249018  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:29:38.249026  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249036  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249044  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 00:29:38.249058  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 00:29:38.249061  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249065  525066 command_runner.go:130] >       "size":  "74491780",
	I1212 00:29:38.249070  525066 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:29:38.249073  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249080  525066 command_runner.go:130] >     },
	I1212 00:29:38.249083  525066 command_runner.go:130] >     {
	I1212 00:29:38.249093  525066 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:29:38.249104  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249109  525066 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:29:38.249112  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249116  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249125  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 00:29:38.249135  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 00:29:38.249139  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249142  525066 command_runner.go:130] >       "size":  "60857170",
	I1212 00:29:38.249146  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249150  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249153  525066 command_runner.go:130] >       },
	I1212 00:29:38.249166  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249173  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249177  525066 command_runner.go:130] >     },
	I1212 00:29:38.249179  525066 command_runner.go:130] >     {
	I1212 00:29:38.249186  525066 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:29:38.249192  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249197  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:29:38.249201  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249205  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249215  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 00:29:38.249230  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 00:29:38.249234  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249241  525066 command_runner.go:130] >       "size":  "84949999",
	I1212 00:29:38.249245  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249249  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249254  525066 command_runner.go:130] >       },
	I1212 00:29:38.249259  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249263  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249268  525066 command_runner.go:130] >     },
	I1212 00:29:38.249272  525066 command_runner.go:130] >     {
	I1212 00:29:38.249281  525066 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:29:38.249294  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249301  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:29:38.249304  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249308  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249317  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 00:29:38.249326  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 00:29:38.249337  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249341  525066 command_runner.go:130] >       "size":  "72170325",
	I1212 00:29:38.249345  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249348  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249356  525066 command_runner.go:130] >       },
	I1212 00:29:38.249364  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249367  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249371  525066 command_runner.go:130] >     },
	I1212 00:29:38.249374  525066 command_runner.go:130] >     {
	I1212 00:29:38.249381  525066 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:29:38.249386  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249391  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:29:38.249394  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249398  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249409  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 00:29:38.249426  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:29:38.249434  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249438  525066 command_runner.go:130] >       "size":  "74106775",
	I1212 00:29:38.249450  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249454  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249458  525066 command_runner.go:130] >     },
	I1212 00:29:38.249461  525066 command_runner.go:130] >     {
	I1212 00:29:38.249468  525066 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:29:38.249472  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249481  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:29:38.249484  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249488  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249502  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 00:29:38.249522  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 00:29:38.249528  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249532  525066 command_runner.go:130] >       "size":  "49822549",
	I1212 00:29:38.249535  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249539  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249549  525066 command_runner.go:130] >       },
	I1212 00:29:38.249553  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249556  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249559  525066 command_runner.go:130] >     },
	I1212 00:29:38.249562  525066 command_runner.go:130] >     {
	I1212 00:29:38.249568  525066 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:29:38.249572  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249576  525066 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.249581  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249586  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249598  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 00:29:38.249606  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 00:29:38.249613  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249617  525066 command_runner.go:130] >       "size":  "519884",
	I1212 00:29:38.249621  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249626  525066 command_runner.go:130] >         "value":  "65535"
	I1212 00:29:38.249633  525066 command_runner.go:130] >       },
	I1212 00:29:38.249642  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249646  525066 command_runner.go:130] >       "pinned":  true
	I1212 00:29:38.249649  525066 command_runner.go:130] >     }
	I1212 00:29:38.249653  525066 command_runner.go:130] >   ]
	I1212 00:29:38.249656  525066 command_runner.go:130] > }
	I1212 00:29:38.252138  525066 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:38.252165  525066 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:29:38.252226  525066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:38.276626  525066 command_runner.go:130] > {
	I1212 00:29:38.276647  525066 command_runner.go:130] >   "images":  [
	I1212 00:29:38.276651  525066 command_runner.go:130] >     {
	I1212 00:29:38.276660  525066 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:29:38.276674  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276681  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:29:38.276684  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276690  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276700  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 00:29:38.276711  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 00:29:38.276717  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276721  525066 command_runner.go:130] >       "size":  "111333938",
	I1212 00:29:38.276725  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.276731  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276737  525066 command_runner.go:130] >     },
	I1212 00:29:38.276740  525066 command_runner.go:130] >     {
	I1212 00:29:38.276747  525066 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:29:38.276754  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276760  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:29:38.276767  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276771  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276781  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 00:29:38.276790  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:29:38.276794  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276799  525066 command_runner.go:130] >       "size":  "29037500",
	I1212 00:29:38.276807  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.276815  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276822  525066 command_runner.go:130] >     },
	I1212 00:29:38.276826  525066 command_runner.go:130] >     {
	I1212 00:29:38.276833  525066 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:29:38.276839  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276845  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:29:38.276850  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276854  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276868  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 00:29:38.276876  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 00:29:38.276879  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276883  525066 command_runner.go:130] >       "size":  "74491780",
	I1212 00:29:38.276891  525066 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:29:38.276895  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276901  525066 command_runner.go:130] >     },
	I1212 00:29:38.276904  525066 command_runner.go:130] >     {
	I1212 00:29:38.276911  525066 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:29:38.276918  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276922  525066 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:29:38.276925  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276930  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276940  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 00:29:38.276951  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 00:29:38.276954  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276973  525066 command_runner.go:130] >       "size":  "60857170",
	I1212 00:29:38.276977  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.276980  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.276983  525066 command_runner.go:130] >       },
	I1212 00:29:38.276994  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277001  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277004  525066 command_runner.go:130] >     },
	I1212 00:29:38.277007  525066 command_runner.go:130] >     {
	I1212 00:29:38.277014  525066 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:29:38.277019  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277032  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:29:38.277039  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277043  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277051  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 00:29:38.277066  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 00:29:38.277070  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277074  525066 command_runner.go:130] >       "size":  "84949999",
	I1212 00:29:38.277078  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277086  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277089  525066 command_runner.go:130] >       },
	I1212 00:29:38.277093  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277101  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277104  525066 command_runner.go:130] >     },
	I1212 00:29:38.277110  525066 command_runner.go:130] >     {
	I1212 00:29:38.277117  525066 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:29:38.277123  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277129  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:29:38.277132  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277136  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277145  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 00:29:38.277157  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 00:29:38.277160  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277164  525066 command_runner.go:130] >       "size":  "72170325",
	I1212 00:29:38.277167  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277171  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277175  525066 command_runner.go:130] >       },
	I1212 00:29:38.277181  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277186  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277191  525066 command_runner.go:130] >     },
	I1212 00:29:38.277194  525066 command_runner.go:130] >     {
	I1212 00:29:38.277203  525066 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:29:38.277209  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277215  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:29:38.277225  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277229  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277238  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 00:29:38.277251  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:29:38.277255  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277259  525066 command_runner.go:130] >       "size":  "74106775",
	I1212 00:29:38.277263  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277269  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277273  525066 command_runner.go:130] >     },
	I1212 00:29:38.277276  525066 command_runner.go:130] >     {
	I1212 00:29:38.277283  525066 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:29:38.277289  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277294  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:29:38.277297  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277301  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277309  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 00:29:38.277326  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 00:29:38.277330  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277334  525066 command_runner.go:130] >       "size":  "49822549",
	I1212 00:29:38.277340  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277344  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277347  525066 command_runner.go:130] >       },
	I1212 00:29:38.277351  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277357  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277360  525066 command_runner.go:130] >     },
	I1212 00:29:38.277364  525066 command_runner.go:130] >     {
	I1212 00:29:38.277373  525066 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:29:38.277377  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277390  525066 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.277394  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277397  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277405  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 00:29:38.277416  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 00:29:38.277424  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277429  525066 command_runner.go:130] >       "size":  "519884",
	I1212 00:29:38.277432  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277438  525066 command_runner.go:130] >         "value":  "65535"
	I1212 00:29:38.277442  525066 command_runner.go:130] >       },
	I1212 00:29:38.277447  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277453  525066 command_runner.go:130] >       "pinned":  true
	I1212 00:29:38.277456  525066 command_runner.go:130] >     }
	I1212 00:29:38.277459  525066 command_runner.go:130] >   ]
	I1212 00:29:38.277464  525066 command_runner.go:130] > }
	I1212 00:29:38.282583  525066 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:38.282606  525066 cache_images.go:86] Images are preloaded, skipping loading
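Editor's note: the JSON dump above is the CRI image-list response that minikube inspects before concluding the preload can be skipped. A minimal Go sketch of that check, assuming the same response shape (the struct and function names are illustrative, not minikube's actual types):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // imageList mirrors the JSON keys visible in the log output above.
    type imageList struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Pinned      bool     `json:"pinned"`
    	} `json:"images"`
    }

    // hasImage reports whether any listed image carries the wanted repo tag.
    func hasImage(raw []byte, tag string) (bool, error) {
    	var list imageList
    	if err := json.Unmarshal(raw, &list); err != nil {
    		return false, err
    	}
    	for _, img := range list.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Sample input trimmed from the listing above; data is illustrative.
    	raw := []byte(`{"images": [{"id": "d7b100", "repoTags": ["registry.k8s.io/pause:3.10.1"], "pinned": true}]}`)
    	ok, _ := hasImage(raw, "registry.k8s.io/pause:3.10.1")
    	fmt.Println("preloaded:", ok)
    }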
	I1212 00:29:38.282613  525066 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 00:29:38.282744  525066 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-035643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
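Editor's note: the kubelet unit text above is rendered by minikube from the cluster config (kubeadm.go:947). A stripped-down sketch of rendering such a systemd drop-in with text/template; the template body and field names are illustrative, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletOpts carries the handful of values interpolated into the unit.
    type kubeletOpts struct {
    	KubernetesVersion string
    	NodeName          string
    	NodeIP            string
    }

    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unitTmpl))
    	_ = t.Execute(os.Stdout, kubeletOpts{
    		KubernetesVersion: "v1.35.0-beta.0",
    		NodeName:          "functional-035643",
    		NodeIP:            "192.168.49.2",
    	})
    }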
	I1212 00:29:38.282831  525066 ssh_runner.go:195] Run: crio config
	I1212 00:29:38.339065  525066 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 00:29:38.339140  525066 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 00:29:38.339162  525066 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 00:29:38.339180  525066 command_runner.go:130] > #
	I1212 00:29:38.339218  525066 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 00:29:38.339243  525066 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 00:29:38.339261  525066 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 00:29:38.339304  525066 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 00:29:38.339327  525066 command_runner.go:130] > # reload'.
	I1212 00:29:38.339346  525066 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 00:29:38.339379  525066 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 00:29:38.339402  525066 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 00:29:38.339422  525066 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 00:29:38.339436  525066 command_runner.go:130] > [crio]
	I1212 00:29:38.339466  525066 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 00:29:38.339488  525066 command_runner.go:130] > # container images, in this directory.
	I1212 00:29:38.339510  525066 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 00:29:38.339541  525066 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 00:29:38.339562  525066 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1212 00:29:38.339583  525066 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory, separately from the root directory.
	I1212 00:29:38.339600  525066 command_runner.go:130] > # imagestore = ""
	I1212 00:29:38.339629  525066 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 00:29:38.339652  525066 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 00:29:38.339676  525066 command_runner.go:130] > # storage_driver = "overlay"
	I1212 00:29:38.339707  525066 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 00:29:38.339730  525066 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 00:29:38.339746  525066 command_runner.go:130] > # storage_option = [
	I1212 00:29:38.339762  525066 command_runner.go:130] > # ]
	I1212 00:29:38.339794  525066 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 00:29:38.339818  525066 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 00:29:38.339834  525066 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 00:29:38.339852  525066 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 00:29:38.339890  525066 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 00:29:38.339907  525066 command_runner.go:130] > # always happen on a node reboot
	I1212 00:29:38.339923  525066 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 00:29:38.339959  525066 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 00:29:38.339984  525066 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 00:29:38.340001  525066 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 00:29:38.340029  525066 command_runner.go:130] > # version_file_persist = ""
	I1212 00:29:38.340052  525066 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 00:29:38.340072  525066 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 00:29:38.340087  525066 command_runner.go:130] > # internal_wipe = true
	I1212 00:29:38.340117  525066 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1212 00:29:38.340140  525066 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1212 00:29:38.340157  525066 command_runner.go:130] > # internal_repair = true
	I1212 00:29:38.340175  525066 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 00:29:38.340208  525066 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 00:29:38.340228  525066 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 00:29:38.340246  525066 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 00:29:38.340277  525066 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 00:29:38.340300  525066 command_runner.go:130] > [crio.api]
	I1212 00:29:38.340319  525066 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 00:29:38.340336  525066 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 00:29:38.340365  525066 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 00:29:38.340387  525066 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 00:29:38.340407  525066 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 00:29:38.340447  525066 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 00:29:38.340822  525066 command_runner.go:130] > # stream_port = "0"
	I1212 00:29:38.340835  525066 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 00:29:38.341007  525066 command_runner.go:130] > # stream_enable_tls = false
	I1212 00:29:38.341018  525066 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 00:29:38.341210  525066 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 00:29:38.341221  525066 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 00:29:38.341229  525066 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1212 00:29:38.341233  525066 command_runner.go:130] > # stream_tls_cert = ""
	I1212 00:29:38.341239  525066 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 00:29:38.341245  525066 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1212 00:29:38.341249  525066 command_runner.go:130] > # stream_tls_key = ""
	I1212 00:29:38.341255  525066 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 00:29:38.341261  525066 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 00:29:38.341272  525066 command_runner.go:130] > # automatically pick up the changes.
	I1212 00:29:38.341446  525066 command_runner.go:130] > # stream_tls_ca = ""
	I1212 00:29:38.341475  525066 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:29:38.341751  525066 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 00:29:38.341765  525066 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:29:38.341770  525066 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
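Editor's note: the two 80 MiB limits above (83886080 bytes = 80 * 1024 * 1024) correspond to standard gRPC server options. A sketch of how a server can apply the same limits with google.golang.org/grpc — the option calls are real, but this is not CRI-O's actual wiring, and the socket path is illustrative:

    package main

    import (
    	"log"
    	"net"

    	"google.golang.org/grpc"
    )

    func main() {
    	// 80 * 1024 * 1024 matches the grpc_max_*_msg_size defaults above.
    	const maxMsg = 80 * 1024 * 1024
    	srv := grpc.NewServer(
    		grpc.MaxRecvMsgSize(maxMsg),
    		grpc.MaxSendMsgSize(maxMsg),
    	)
    	lis, err := net.Listen("unix", "/tmp/demo.sock") // illustrative path
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Register services here before serving.
    	log.Fatal(srv.Serve(lis))
    }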
	I1212 00:29:38.341777  525066 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 00:29:38.341782  525066 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 00:29:38.341786  525066 command_runner.go:130] > [crio.runtime]
	I1212 00:29:38.341792  525066 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 00:29:38.341798  525066 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 00:29:38.341801  525066 command_runner.go:130] > # "nofile=1024:2048"
	I1212 00:29:38.341807  525066 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 00:29:38.341811  525066 command_runner.go:130] > # default_ulimits = [
	I1212 00:29:38.341814  525066 command_runner.go:130] > # ]
	I1212 00:29:38.341821  525066 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 00:29:38.341824  525066 command_runner.go:130] > # no_pivot = false
	I1212 00:29:38.341830  525066 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 00:29:38.341836  525066 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 00:29:38.341841  525066 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 00:29:38.341847  525066 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 00:29:38.341851  525066 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 00:29:38.341858  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:29:38.342059  525066 command_runner.go:130] > # conmon = ""
	I1212 00:29:38.342069  525066 command_runner.go:130] > # Cgroup setting for conmon
	I1212 00:29:38.342077  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 00:29:38.342081  525066 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 00:29:38.342087  525066 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 00:29:38.342093  525066 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 00:29:38.342100  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:29:38.342293  525066 command_runner.go:130] > # conmon_env = [
	I1212 00:29:38.342301  525066 command_runner.go:130] > # ]
	I1212 00:29:38.342307  525066 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 00:29:38.342312  525066 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 00:29:38.342318  525066 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 00:29:38.342321  525066 command_runner.go:130] > # default_env = [
	I1212 00:29:38.342325  525066 command_runner.go:130] > # ]
	I1212 00:29:38.342330  525066 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 00:29:38.342338  525066 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1212 00:29:38.342531  525066 command_runner.go:130] > # selinux = false
	I1212 00:29:38.342542  525066 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 00:29:38.342551  525066 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1212 00:29:38.342556  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342765  525066 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:29:38.342777  525066 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1212 00:29:38.342783  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342787  525066 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1212 00:29:38.342804  525066 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 00:29:38.342810  525066 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 00:29:38.342817  525066 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 00:29:38.342823  525066 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1212 00:29:38.342828  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342833  525066 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 00:29:38.342838  525066 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 00:29:38.342842  525066 command_runner.go:130] > # the cgroup blockio controller.
	I1212 00:29:38.343029  525066 command_runner.go:130] > # blockio_config_file = ""
	I1212 00:29:38.343040  525066 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1212 00:29:38.343044  525066 command_runner.go:130] > # blockio parameters.
	I1212 00:29:38.343244  525066 command_runner.go:130] > # blockio_reload = false
	I1212 00:29:38.343255  525066 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 00:29:38.343260  525066 command_runner.go:130] > # irqbalance daemon.
	I1212 00:29:38.343265  525066 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 00:29:38.343271  525066 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1212 00:29:38.343278  525066 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1212 00:29:38.343285  525066 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1212 00:29:38.343472  525066 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1212 00:29:38.343488  525066 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 00:29:38.343494  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.343668  525066 command_runner.go:130] > # rdt_config_file = ""
	I1212 00:29:38.343679  525066 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 00:29:38.343683  525066 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 00:29:38.343690  525066 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 00:29:38.343893  525066 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 00:29:38.343905  525066 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 00:29:38.343912  525066 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 00:29:38.343920  525066 command_runner.go:130] > # will be added.
	I1212 00:29:38.343925  525066 command_runner.go:130] > # default_capabilities = [
	I1212 00:29:38.344172  525066 command_runner.go:130] > # 	"CHOWN",
	I1212 00:29:38.344180  525066 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 00:29:38.344184  525066 command_runner.go:130] > # 	"FSETID",
	I1212 00:29:38.344187  525066 command_runner.go:130] > # 	"FOWNER",
	I1212 00:29:38.344191  525066 command_runner.go:130] > # 	"SETGID",
	I1212 00:29:38.344194  525066 command_runner.go:130] > # 	"SETUID",
	I1212 00:29:38.344217  525066 command_runner.go:130] > # 	"SETPCAP",
	I1212 00:29:38.344397  525066 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 00:29:38.344405  525066 command_runner.go:130] > # 	"KILL",
	I1212 00:29:38.344408  525066 command_runner.go:130] > # ]
	I1212 00:29:38.344417  525066 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 00:29:38.344424  525066 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 00:29:38.344614  525066 command_runner.go:130] > # add_inheritable_capabilities = false
	I1212 00:29:38.344634  525066 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 00:29:38.344641  525066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:29:38.344645  525066 command_runner.go:130] > default_sysctls = [
	I1212 00:29:38.344818  525066 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1212 00:29:38.344834  525066 command_runner.go:130] > ]
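Editor's note: that default sysctl, net.ipv4.ip_unprivileged_port_start=0, is what lets a non-root process inside the pod bind ports below 1024. A tiny Go check that would fail with a permission error in a container without it (assuming it runs as a non-root user):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// With net.ipv4.ip_unprivileged_port_start=0 applied, a non-root
    	// process can bind a privileged port such as 80.
    	ln, err := net.Listen("tcp", ":80")
    	if err != nil {
    		fmt.Println("bind failed (sysctl not applied, or port in use):", err)
    		return
    	}
    	defer ln.Close()
    	fmt.Println("bound", ln.Addr())
    }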
	I1212 00:29:38.344839  525066 command_runner.go:130] > # List of devices on the host that a
	I1212 00:29:38.344846  525066 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 00:29:38.344850  525066 command_runner.go:130] > # allowed_devices = [
	I1212 00:29:38.345064  525066 command_runner.go:130] > # 	"/dev/fuse",
	I1212 00:29:38.345072  525066 command_runner.go:130] > # 	"/dev/net/tun",
	I1212 00:29:38.345076  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345089  525066 command_runner.go:130] > # List of additional devices, specified as
	I1212 00:29:38.345098  525066 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 00:29:38.345141  525066 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 00:29:38.345151  525066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:29:38.345155  525066 command_runner.go:130] > # additional_devices = [
	I1212 00:29:38.345354  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345364  525066 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 00:29:38.345368  525066 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 00:29:38.345371  525066 command_runner.go:130] > # 	"/etc/cdi",
	I1212 00:29:38.345585  525066 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 00:29:38.345593  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345600  525066 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 00:29:38.345606  525066 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 00:29:38.345609  525066 command_runner.go:130] > # Defaults to false.
	I1212 00:29:38.345614  525066 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 00:29:38.345652  525066 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 00:29:38.345661  525066 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 00:29:38.345665  525066 command_runner.go:130] > # hooks_dir = [
	I1212 00:29:38.345877  525066 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 00:29:38.345885  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345892  525066 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 00:29:38.345899  525066 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 00:29:38.345904  525066 command_runner.go:130] > # its default mounts from the following two files:
	I1212 00:29:38.345907  525066 command_runner.go:130] > #
	I1212 00:29:38.345914  525066 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 00:29:38.345957  525066 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 00:29:38.345963  525066 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 00:29:38.345966  525066 command_runner.go:130] > #
	I1212 00:29:38.345972  525066 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 00:29:38.345979  525066 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 00:29:38.345986  525066 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 00:29:38.345991  525066 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 00:29:38.346020  525066 command_runner.go:130] > #
	I1212 00:29:38.346210  525066 command_runner.go:130] > # default_mounts_file = ""
	I1212 00:29:38.346221  525066 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 00:29:38.346228  525066 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 00:29:38.346444  525066 command_runner.go:130] > # pids_limit = -1
	I1212 00:29:38.346456  525066 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 00:29:38.346463  525066 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 00:29:38.346469  525066 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 00:29:38.346478  525066 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 00:29:38.346512  525066 command_runner.go:130] > # log_size_max = -1
	I1212 00:29:38.346523  525066 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 00:29:38.346724  525066 command_runner.go:130] > # log_to_journald = false
	I1212 00:29:38.346736  525066 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 00:29:38.346742  525066 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 00:29:38.346747  525066 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 00:29:38.347111  525066 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 00:29:38.347122  525066 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 00:29:38.347127  525066 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 00:29:38.347132  525066 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 00:29:38.347136  525066 command_runner.go:130] > # read_only = false
	I1212 00:29:38.347142  525066 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 00:29:38.347149  525066 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 00:29:38.347186  525066 command_runner.go:130] > # live configuration reload.
	I1212 00:29:38.347359  525066 command_runner.go:130] > # log_level = "info"
	I1212 00:29:38.347376  525066 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 00:29:38.347381  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.347597  525066 command_runner.go:130] > # log_filter = ""
	I1212 00:29:38.347608  525066 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 00:29:38.347615  525066 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 00:29:38.347619  525066 command_runner.go:130] > # separated by comma.
	I1212 00:29:38.347671  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347679  525066 command_runner.go:130] > # uid_mappings = ""
	I1212 00:29:38.347686  525066 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 00:29:38.347692  525066 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 00:29:38.347696  525066 command_runner.go:130] > # separated by comma.
	I1212 00:29:38.347704  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347707  525066 command_runner.go:130] > # gid_mappings = ""
	I1212 00:29:38.347714  525066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 00:29:38.347746  525066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:29:38.347757  525066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:29:38.347765  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347769  525066 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 00:29:38.347775  525066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 00:29:38.347781  525066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:29:38.347787  525066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:29:38.347822  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.348158  525066 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 00:29:38.348170  525066 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 00:29:38.348176  525066 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 00:29:38.348182  525066 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 00:29:38.348415  525066 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 00:29:38.348427  525066 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 00:29:38.348433  525066 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 00:29:38.348438  525066 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 00:29:38.348442  525066 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 00:29:38.348641  525066 command_runner.go:130] > # drop_infra_ctr = true
	I1212 00:29:38.348653  525066 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 00:29:38.348659  525066 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 00:29:38.348666  525066 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 00:29:38.348674  525066 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 00:29:38.348712  525066 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1212 00:29:38.348725  525066 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1212 00:29:38.348731  525066 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1212 00:29:38.348736  525066 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1212 00:29:38.348935  525066 command_runner.go:130] > # shared_cpuset = ""
	I1212 00:29:38.348946  525066 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 00:29:38.348952  525066 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 00:29:38.348956  525066 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 00:29:38.348964  525066 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 00:29:38.349178  525066 command_runner.go:130] > # pinns_path = ""
	I1212 00:29:38.349189  525066 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1212 00:29:38.349195  525066 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1212 00:29:38.349199  525066 command_runner.go:130] > # enable_criu_support = true
	I1212 00:29:38.349214  525066 command_runner.go:130] > # Enable/disable the generation of the container and
	I1212 00:29:38.349253  525066 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1212 00:29:38.349272  525066 command_runner.go:130] > # enable_pod_events = false
	I1212 00:29:38.349291  525066 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 00:29:38.349322  525066 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1212 00:29:38.349505  525066 command_runner.go:130] > # default_runtime = "crun"
	I1212 00:29:38.349536  525066 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 00:29:38.349573  525066 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1212 00:29:38.349601  525066 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 00:29:38.349618  525066 command_runner.go:130] > # creation as a file is not desired either.
	I1212 00:29:38.349653  525066 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 00:29:38.349674  525066 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 00:29:38.349690  525066 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 00:29:38.349956  525066 command_runner.go:130] > # ]
	I1212 00:29:38.350003  525066 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 00:29:38.350025  525066 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 00:29:38.350043  525066 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1212 00:29:38.350074  525066 command_runner.go:130] > # Each entry in the table should follow the format:
	I1212 00:29:38.350093  525066 command_runner.go:130] > #
	I1212 00:29:38.350110  525066 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1212 00:29:38.350127  525066 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1212 00:29:38.350158  525066 command_runner.go:130] > # runtime_type = "oci"
	I1212 00:29:38.350179  525066 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1212 00:29:38.350201  525066 command_runner.go:130] > # inherit_default_runtime = false
	I1212 00:29:38.350218  525066 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1212 00:29:38.350253  525066 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1212 00:29:38.350271  525066 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1212 00:29:38.350287  525066 command_runner.go:130] > # monitor_env = []
	I1212 00:29:38.350317  525066 command_runner.go:130] > # privileged_without_host_devices = false
	I1212 00:29:38.350339  525066 command_runner.go:130] > # allowed_annotations = []
	I1212 00:29:38.350358  525066 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1212 00:29:38.350372  525066 command_runner.go:130] > # no_sync_log = false
	I1212 00:29:38.350402  525066 command_runner.go:130] > # default_annotations = {}
	I1212 00:29:38.350419  525066 command_runner.go:130] > # stream_websockets = false
	I1212 00:29:38.350436  525066 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:29:38.350499  525066 command_runner.go:130] > # Where:
	I1212 00:29:38.350529  525066 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1212 00:29:38.350561  525066 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1212 00:29:38.350588  525066 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 00:29:38.350607  525066 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 00:29:38.350635  525066 command_runner.go:130] > #   in $PATH.
	I1212 00:29:38.350670  525066 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1212 00:29:38.350713  525066 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 00:29:38.350735  525066 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1212 00:29:38.350750  525066 command_runner.go:130] > #   state.
	I1212 00:29:38.350780  525066 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 00:29:38.350928  525066 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 00:29:38.351028  525066 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1212 00:29:38.351155  525066 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1212 00:29:38.351251  525066 command_runner.go:130] > #   the values from the default runtime on load time.
	I1212 00:29:38.351344  525066 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 00:29:38.351530  525066 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 00:29:38.351817  525066 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 00:29:38.352119  525066 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 00:29:38.352319  525066 command_runner.go:130] > #   The currently recognized values are:
	I1212 00:29:38.352557  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 00:29:38.352766  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 00:29:38.352929  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 00:29:38.353036  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 00:29:38.353153  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 00:29:38.353519  525066 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 00:29:38.353569  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1212 00:29:38.353580  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1212 00:29:38.353587  525066 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 00:29:38.353593  525066 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1212 00:29:38.353637  525066 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1212 00:29:38.353645  525066 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1212 00:29:38.353652  525066 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1212 00:29:38.353658  525066 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1212 00:29:38.353664  525066 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1212 00:29:38.353679  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1212 00:29:38.353695  525066 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1212 00:29:38.353699  525066 command_runner.go:130] > #   deprecated option "conmon".
	I1212 00:29:38.353706  525066 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1212 00:29:38.353766  525066 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1212 00:29:38.353805  525066 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1212 00:29:38.353814  525066 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 00:29:38.353822  525066 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1212 00:29:38.353826  525066 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1212 00:29:38.353834  525066 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1212 00:29:38.353838  525066 command_runner.go:130] > #   conmon-rs by using:
	I1212 00:29:38.353893  525066 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1212 00:29:38.353903  525066 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1212 00:29:38.353947  525066 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1212 00:29:38.353958  525066 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1212 00:29:38.353963  525066 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1212 00:29:38.353971  525066 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1212 00:29:38.353979  525066 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1212 00:29:38.353984  525066 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1212 00:29:38.353992  525066 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1212 00:29:38.354039  525066 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1212 00:29:38.354048  525066 command_runner.go:130] > #   when a machine crash happens.
	I1212 00:29:38.354056  525066 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1212 00:29:38.354064  525066 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1212 00:29:38.354100  525066 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1212 00:29:38.354106  525066 command_runner.go:130] > #   seccomp profile for the runtime.
	I1212 00:29:38.354113  525066 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1212 00:29:38.354120  525066 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1212 00:29:38.354123  525066 command_runner.go:130] > #
	I1212 00:29:38.354169  525066 command_runner.go:130] > # Using the seccomp notifier feature:
	I1212 00:29:38.354175  525066 command_runner.go:130] > #
	I1212 00:29:38.354188  525066 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1212 00:29:38.354195  525066 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1212 00:29:38.354198  525066 command_runner.go:130] > #
	I1212 00:29:38.354204  525066 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1212 00:29:38.354210  525066 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1212 00:29:38.354212  525066 command_runner.go:130] > #
	I1212 00:29:38.354258  525066 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1212 00:29:38.354270  525066 command_runner.go:130] > # feature.
	I1212 00:29:38.354273  525066 command_runner.go:130] > #
	I1212 00:29:38.354279  525066 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1212 00:29:38.354286  525066 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1212 00:29:38.354292  525066 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1212 00:29:38.354298  525066 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1212 00:29:38.354350  525066 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1212 00:29:38.354355  525066 command_runner.go:130] > #
	I1212 00:29:38.354362  525066 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1212 00:29:38.354402  525066 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1212 00:29:38.354408  525066 command_runner.go:130] > #
	I1212 00:29:38.354414  525066 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1212 00:29:38.354420  525066 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1212 00:29:38.354423  525066 command_runner.go:130] > #
	I1212 00:29:38.354429  525066 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1212 00:29:38.354471  525066 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1212 00:29:38.354477  525066 command_runner.go:130] > # limitation.
	I1212 00:29:38.354481  525066 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1212 00:29:38.354485  525066 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1212 00:29:38.354492  525066 command_runner.go:130] > runtime_type = ""
	I1212 00:29:38.354498  525066 command_runner.go:130] > runtime_root = "/run/crun"
	I1212 00:29:38.354502  525066 command_runner.go:130] > inherit_default_runtime = false
	I1212 00:29:38.354538  525066 command_runner.go:130] > runtime_config_path = ""
	I1212 00:29:38.354545  525066 command_runner.go:130] > container_min_memory = ""
	I1212 00:29:38.354550  525066 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 00:29:38.354554  525066 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 00:29:38.354558  525066 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:29:38.354561  525066 command_runner.go:130] > allowed_annotations = [
	I1212 00:29:38.354565  525066 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1212 00:29:38.354568  525066 command_runner.go:130] > ]
	I1212 00:29:38.354573  525066 command_runner.go:130] > privileged_without_host_devices = false
	I1212 00:29:38.354577  525066 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 00:29:38.354588  525066 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1212 00:29:38.354592  525066 command_runner.go:130] > runtime_type = ""
	I1212 00:29:38.354595  525066 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 00:29:38.354647  525066 command_runner.go:130] > inherit_default_runtime = false
	I1212 00:29:38.354654  525066 command_runner.go:130] > runtime_config_path = ""
	I1212 00:29:38.354659  525066 command_runner.go:130] > container_min_memory = ""
	I1212 00:29:38.354663  525066 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 00:29:38.354667  525066 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 00:29:38.354671  525066 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:29:38.354675  525066 command_runner.go:130] > privileged_without_host_devices = false
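Editor's note: the two handler tables above ([crio.runtime.runtimes.crun] and [crio.runtime.runtimes.runc]) follow the per-handler schema documented earlier. CRI-O's config is plain TOML, so any TOML decoder can read a handler table; this sketch uses github.com/BurntSushi/toml with an illustrative subset of the keys shown above:

    package main

    import (
    	"fmt"

    	"github.com/BurntSushi/toml"
    )

    // runtimeHandler mirrors an illustrative subset of the per-handler keys.
    type runtimeHandler struct {
    	RuntimePath        string   `toml:"runtime_path"`
    	RuntimeRoot        string   `toml:"runtime_root"`
    	MonitorPath        string   `toml:"monitor_path"`
    	MonitorCgroup      string   `toml:"monitor_cgroup"`
    	AllowedAnnotations []string `toml:"allowed_annotations"`
    }

    type crioConfig struct {
    	Crio struct {
    		Runtime struct {
    			Runtimes map[string]runtimeHandler `toml:"runtimes"`
    		} `toml:"runtime"`
    	} `toml:"crio"`
    }

    const sample = `
    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/libexec/crio/crun"
    runtime_root = "/run/crun"
    monitor_path = "/usr/libexec/crio/conmon"
    monitor_cgroup = "pod"
    allowed_annotations = ["io.containers.trace-syscall"]
    `

    func main() {
    	var cfg crioConfig
    	if _, err := toml.Decode(sample, &cfg); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%+v\n", cfg.Crio.Runtime.Runtimes["crun"])
    }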
	I1212 00:29:38.354692  525066 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 00:29:38.354700  525066 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 00:29:38.354706  525066 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 00:29:38.354719  525066 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 00:29:38.354731  525066 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1212 00:29:38.354778  525066 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1212 00:29:38.354787  525066 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1212 00:29:38.354793  525066 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 00:29:38.354803  525066 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 00:29:38.354848  525066 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 00:29:38.354862  525066 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 00:29:38.354870  525066 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 00:29:38.354909  525066 command_runner.go:130] > # Example:
	I1212 00:29:38.354916  525066 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 00:29:38.354921  525066 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 00:29:38.354929  525066 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 00:29:38.354970  525066 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 00:29:38.354976  525066 command_runner.go:130] > # cpuset = "0-1"
	I1212 00:29:38.354979  525066 command_runner.go:130] > # cpushares = "5"
	I1212 00:29:38.354982  525066 command_runner.go:130] > # cpuquota = "1000"
	I1212 00:29:38.354986  525066 command_runner.go:130] > # cpuperiod = "100000"
	I1212 00:29:38.354989  525066 command_runner.go:130] > # cpulimit = "35"
	I1212 00:29:38.354992  525066 command_runner.go:130] > # Where:
	I1212 00:29:38.355002  525066 command_runner.go:130] > # The workload name is workload-type.
	I1212 00:29:38.355009  525066 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 00:29:38.355015  525066 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 00:29:38.355066  525066 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 00:29:38.355077  525066 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 00:29:38.355083  525066 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
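Editor's note: tying the example together, opting a pod into the workload takes the key-only activation annotation, and a per-container override follows the $annotation_prefix.$resource/$ctrName form documented a few lines up. A sketch of the resulting annotation map (the container name "mycontainer" is hypothetical):

    package main

    import "fmt"

    func main() {
    	// Key-only activation annotation plus one per-container override,
    	// following the $annotation_prefix.$resource/$ctrName form above.
    	annotations := map[string]string{
    		"io.crio/workload":                            "",
    		"io.crio.workload-type.cpushares/mycontainer": "5",
    	}
    	fmt.Println(annotations)
    }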
	I1212 00:29:38.355088  525066 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1212 00:29:38.355095  525066 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1212 00:29:38.355099  525066 command_runner.go:130] > # Default value is set to true
	I1212 00:29:38.355467  525066 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1212 00:29:38.355620  525066 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1212 00:29:38.355721  525066 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1212 00:29:38.355871  525066 command_runner.go:130] > # Default value is set to 'false'
	I1212 00:29:38.356033  525066 command_runner.go:130] > # disable_hostport_mapping = false
	I1212 00:29:38.356163  525066 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1212 00:29:38.356284  525066 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1212 00:29:38.356367  525066 command_runner.go:130] > # timezone = ""
	I1212 00:29:38.356485  525066 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 00:29:38.356560  525066 command_runner.go:130] > #
	I1212 00:29:38.356636  525066 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 00:29:38.356830  525066 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1212 00:29:38.356937  525066 command_runner.go:130] > [crio.image]
	I1212 00:29:38.357065  525066 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 00:29:38.357172  525066 command_runner.go:130] > # default_transport = "docker://"
	I1212 00:29:38.357258  525066 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 00:29:38.357455  525066 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:29:38.357729  525066 command_runner.go:130] > # global_auth_file = ""
	I1212 00:29:38.357787  525066 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 00:29:38.357796  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.357801  525066 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.357809  525066 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 00:29:38.357821  525066 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:29:38.357827  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.357837  525066 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 00:29:38.357843  525066 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 00:29:38.357850  525066 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 00:29:38.358627  525066 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 00:29:38.358638  525066 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 00:29:38.358643  525066 command_runner.go:130] > # pause_command = "/pause"
	I1212 00:29:38.358649  525066 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1212 00:29:38.358655  525066 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1212 00:29:38.358662  525066 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1212 00:29:38.358668  525066 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1212 00:29:38.358674  525066 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1212 00:29:38.358693  525066 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1212 00:29:38.358700  525066 command_runner.go:130] > # pinned_images = [
	I1212 00:29:38.358703  525066 command_runner.go:130] > # ]
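	As a sketch of those three matching modes in a crio.conf drop-in (the image names are illustrative):
	
	[crio.image]
	pinned_images = [
	    "registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
	    "quay.io/myorg/*",                # glob: wildcard only at the end
	    "*coredns*",                      # keyword: wildcards on both ends
	]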
	I1212 00:29:38.358709  525066 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 00:29:38.358716  525066 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 00:29:38.358723  525066 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 00:29:38.358729  525066 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 00:29:38.358734  525066 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 00:29:38.358740  525066 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1212 00:29:38.358745  525066 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1212 00:29:38.358752  525066 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1212 00:29:38.358758  525066 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1212 00:29:38.358764  525066 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1212 00:29:38.358771  525066 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1212 00:29:38.358776  525066 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1212 00:29:38.358782  525066 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 00:29:38.358788  525066 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 00:29:38.358791  525066 command_runner.go:130] > # changing them here.
	I1212 00:29:38.358801  525066 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1212 00:29:38.358805  525066 command_runner.go:130] > # insecure_registries = [
	I1212 00:29:38.358808  525066 command_runner.go:130] > # ]
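	Following that recommendation, the containers-registries.conf(5) route looks roughly like this (the registry host is hypothetical):
	
	[[registry]]
	location = "registry.internal:5000"   # hypothetical in-cluster registry
	insecure = true                       # skip TLS verification for this registry only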
	I1212 00:29:38.358814  525066 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 00:29:38.358828  525066 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 00:29:38.358833  525066 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 00:29:38.358838  525066 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 00:29:38.358842  525066 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 00:29:38.358848  525066 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1212 00:29:38.358855  525066 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1212 00:29:38.358860  525066 command_runner.go:130] > # auto_reload_registries = false
	I1212 00:29:38.358866  525066 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1212 00:29:38.358874  525066 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1212 00:29:38.358881  525066 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1212 00:29:38.358885  525066 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1212 00:29:38.358889  525066 command_runner.go:130] > # The mode of short name resolution.
	I1212 00:29:38.358896  525066 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1212 00:29:38.358903  525066 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1212 00:29:38.358908  525066 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1212 00:29:38.358913  525066 command_runner.go:130] > # short_name_mode = "enforcing"
	I1212 00:29:38.358919  525066 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1212 00:29:38.358925  525066 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1212 00:29:38.358929  525066 command_runner.go:130] > # oci_artifact_mount_support = true
	I1212 00:29:38.358935  525066 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 00:29:38.358938  525066 command_runner.go:130] > # CNI plugins.
	I1212 00:29:38.358941  525066 command_runner.go:130] > [crio.network]
	I1212 00:29:38.358947  525066 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 00:29:38.358952  525066 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 00:29:38.358956  525066 command_runner.go:130] > # cni_default_network = ""
	I1212 00:29:38.358966  525066 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 00:29:38.358970  525066 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 00:29:38.358975  525066 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 00:29:38.358979  525066 command_runner.go:130] > # plugin_dirs = [
	I1212 00:29:38.358982  525066 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 00:29:38.358985  525066 command_runner.go:130] > # ]
	I1212 00:29:38.358989  525066 command_runner.go:130] > # List of included pod metrics.
	I1212 00:29:38.358993  525066 command_runner.go:130] > # included_pod_metrics = [
	I1212 00:29:38.359000  525066 command_runner.go:130] > # ]
	I1212 00:29:38.359005  525066 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 00:29:38.359010  525066 command_runner.go:130] > [crio.metrics]
	I1212 00:29:38.359017  525066 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 00:29:38.359024  525066 command_runner.go:130] > # enable_metrics = false
	I1212 00:29:38.359029  525066 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 00:29:38.359034  525066 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 00:29:38.359040  525066 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1212 00:29:38.359048  525066 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 00:29:38.359054  525066 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 00:29:38.359068  525066 command_runner.go:130] > # metrics_collectors = [
	I1212 00:29:38.359072  525066 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 00:29:38.359076  525066 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1212 00:29:38.359079  525066 command_runner.go:130] > # 	"containers_oom_total",
	I1212 00:29:38.359083  525066 command_runner.go:130] > # 	"processes_defunct",
	I1212 00:29:38.359087  525066 command_runner.go:130] > # 	"operations_total",
	I1212 00:29:38.359091  525066 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 00:29:38.359095  525066 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 00:29:38.359099  525066 command_runner.go:130] > # 	"operations_errors_total",
	I1212 00:29:38.359103  525066 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 00:29:38.359107  525066 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 00:29:38.359111  525066 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 00:29:38.359115  525066 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 00:29:38.359119  525066 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 00:29:38.359123  525066 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 00:29:38.359128  525066 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1212 00:29:38.359132  525066 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1212 00:29:38.359137  525066 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1212 00:29:38.359139  525066 command_runner.go:130] > # ]
	I1212 00:29:38.359145  525066 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1212 00:29:38.359149  525066 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1212 00:29:38.359155  525066 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 00:29:38.359158  525066 command_runner.go:130] > # metrics_port = 9090
	I1212 00:29:38.359167  525066 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 00:29:38.359171  525066 command_runner.go:130] > # metrics_socket = ""
	I1212 00:29:38.359176  525066 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 00:29:38.359182  525066 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 00:29:38.359188  525066 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 00:29:38.359192  525066 command_runner.go:130] > # certificate on any modification event.
	I1212 00:29:38.359196  525066 command_runner.go:130] > # metrics_cert = ""
	I1212 00:29:38.359201  525066 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 00:29:38.359206  525066 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 00:29:38.359209  525066 command_runner.go:130] > # metrics_key = ""
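	Putting those keys together, a drop-in that enables the metrics endpoint might read as follows; the shortened collector list is for illustration:
	
	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
	    "operations_total",
	    "image_pulls_bytes_total",
	]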
	I1212 00:29:38.359214  525066 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 00:29:38.359218  525066 command_runner.go:130] > [crio.tracing]
	I1212 00:29:38.359224  525066 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 00:29:38.359227  525066 command_runner.go:130] > # enable_tracing = false
	I1212 00:29:38.359233  525066 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 00:29:38.359237  525066 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1212 00:29:38.359243  525066 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1212 00:29:38.359249  525066 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
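	Analogously for tracing, a minimal sketch assuming an OTLP/gRPC collector is already listening on the default endpoint:
	
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000   # always sample, per the comment above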
	I1212 00:29:38.359253  525066 command_runner.go:130] > # CRI-O NRI configuration.
	I1212 00:29:38.359256  525066 command_runner.go:130] > [crio.nri]
	I1212 00:29:38.359260  525066 command_runner.go:130] > # Globally enable or disable NRI.
	I1212 00:29:38.359458  525066 command_runner.go:130] > # enable_nri = true
	I1212 00:29:38.359492  525066 command_runner.go:130] > # NRI socket to listen on.
	I1212 00:29:38.359531  525066 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1212 00:29:38.359552  525066 command_runner.go:130] > # NRI plugin directory to use.
	I1212 00:29:38.359571  525066 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1212 00:29:38.359603  525066 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1212 00:29:38.359625  525066 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1212 00:29:38.359646  525066 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1212 00:29:38.359766  525066 command_runner.go:130] > # nri_disable_connections = false
	I1212 00:29:38.359799  525066 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1212 00:29:38.359833  525066 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1212 00:29:38.359860  525066 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1212 00:29:38.359876  525066 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1212 00:29:38.359893  525066 command_runner.go:130] > # NRI default validator configuration.
	I1212 00:29:38.359933  525066 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1212 00:29:38.359959  525066 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1212 00:29:38.359990  525066 command_runner.go:130] > # can be restricted/rejected:
	I1212 00:29:38.360015  525066 command_runner.go:130] > # - OCI hook injection
	I1212 00:29:38.360033  525066 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1212 00:29:38.360064  525066 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1212 00:29:38.360089  525066 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1212 00:29:38.360107  525066 command_runner.go:130] > # - adjustment of linux namespaces
	I1212 00:29:38.360127  525066 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1212 00:29:38.360166  525066 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1212 00:29:38.360186  525066 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1212 00:29:38.360201  525066 command_runner.go:130] > #
	I1212 00:29:38.360237  525066 command_runner.go:130] > # [crio.nri.default_validator]
	I1212 00:29:38.360255  525066 command_runner.go:130] > # nri_enable_default_validator = false
	I1212 00:29:38.360272  525066 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1212 00:29:38.360303  525066 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1212 00:29:38.360330  525066 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1212 00:29:38.360348  525066 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1212 00:29:38.360476  525066 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1212 00:29:38.360648  525066 command_runner.go:130] > # nri_validator_required_plugins = [
	I1212 00:29:38.360681  525066 command_runner.go:130] > # ]
	I1212 00:29:38.360704  525066 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
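	Mirroring the commented defaults above, a sketch that turns the builtin validator on and rejects only OCI hook injection (the choice of rejected adjustment is illustrative):
	
	[crio.nri]
	enable_nri = true
	
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true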
	I1212 00:29:38.360740  525066 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 00:29:38.360764  525066 command_runner.go:130] > [crio.stats]
	I1212 00:29:38.360783  525066 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 00:29:38.360814  525066 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 00:29:38.360847  525066 command_runner.go:130] > # stats_collection_period = 0
	I1212 00:29:38.360867  525066 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1212 00:29:38.360905  525066 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1212 00:29:38.360921  525066 command_runner.go:130] > # collection_period = 0
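	For completeness, switching stats from on-demand to periodic collection is a one-key change; the 10-second period is an arbitrary example:
	
	[crio.stats]
	stats_collection_period = 10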
	I1212 00:29:38.360984  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313366715Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1212 00:29:38.361015  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313641917Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1212 00:29:38.361052  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313871475Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1212 00:29:38.361075  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.314022397Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1212 00:29:38.361124  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.314372427Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:38.361154  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.31485409Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1212 00:29:38.361178  525066 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 00:29:38.361311  525066 cni.go:84] Creating CNI manager for ""
	I1212 00:29:38.361353  525066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:38.361385  525066 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:29:38.361436  525066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-035643 NodeName:functional-035643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:29:38.361629  525066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-035643"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:29:38.361753  525066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:29:38.369085  525066 command_runner.go:130] > kubeadm
	I1212 00:29:38.369101  525066 command_runner.go:130] > kubectl
	I1212 00:29:38.369105  525066 command_runner.go:130] > kubelet
	I1212 00:29:38.369321  525066 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:29:38.369385  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:29:38.376829  525066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:29:38.389638  525066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:29:38.402701  525066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1212 00:29:38.415693  525066 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:29:38.420581  525066 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 00:29:38.420662  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:38.566232  525066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:39.219049  525066 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643 for IP: 192.168.49.2
	I1212 00:29:39.219079  525066 certs.go:195] generating shared ca certs ...
	I1212 00:29:39.219096  525066 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:39.219238  525066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 00:29:39.219285  525066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 00:29:39.219292  525066 certs.go:257] generating profile certs ...
	I1212 00:29:39.219491  525066 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key
	I1212 00:29:39.219603  525066 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493
	I1212 00:29:39.219699  525066 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key
	I1212 00:29:39.219742  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:29:39.219761  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:29:39.219773  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:29:39.219783  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:29:39.219798  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:29:39.219843  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:29:39.219860  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:29:39.219871  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:29:39.219967  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 00:29:39.220038  525066 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 00:29:39.220049  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:29:39.220117  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:29:39.220147  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:29:39.220202  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 00:29:39.220256  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:29:39.220332  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem -> /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.220378  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.220396  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.221003  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:29:39.242927  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:29:39.262484  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:29:39.285732  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:29:39.303346  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:29:39.320786  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:29:39.338821  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:29:39.356806  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:29:39.374381  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 00:29:39.392333  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 00:29:39.410089  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:29:39.427383  525066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:29:39.439725  525066 ssh_runner.go:195] Run: openssl version
	I1212 00:29:39.445636  525066 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 00:29:39.445982  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.453236  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 00:29:39.460672  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464184  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464289  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464344  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.505960  525066 command_runner.go:130] > 51391683
	I1212 00:29:39.506560  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:29:39.514611  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.522360  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 00:29:39.531109  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.534913  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.535312  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.535374  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.578207  525066 command_runner.go:130] > 3ec20f2e
	I1212 00:29:39.578374  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:29:39.586281  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.593845  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:29:39.601415  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605435  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605483  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605537  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.646250  525066 command_runner.go:130] > b5213941
	I1212 00:29:39.646757  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:29:39.654391  525066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:39.658287  525066 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:39.658314  525066 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 00:29:39.658322  525066 command_runner.go:130] > Device: 259,1	Inode: 2360480     Links: 1
	I1212 00:29:39.658330  525066 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:29:39.658336  525066 command_runner.go:130] > Access: 2025-12-12 00:25:30.972268820 +0000
	I1212 00:29:39.658341  525066 command_runner.go:130] > Modify: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658346  525066 command_runner.go:130] > Change: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658351  525066 command_runner.go:130] >  Birth: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658416  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:29:39.699997  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.700109  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:29:39.748952  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.749499  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:29:39.797710  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.798154  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:29:39.843103  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.843601  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:29:39.887374  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.887871  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:29:39.942362  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.942946  525066 kubeadm.go:401] StartCluster: {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:39.943046  525066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:29:39.943208  525066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:29:39.985575  525066 cri.go:89] found id: ""
	I1212 00:29:39.985700  525066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:29:39.993609  525066 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 00:29:39.993681  525066 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 00:29:39.993702  525066 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 00:29:39.994895  525066 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:29:39.994945  525066 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:29:39.995038  525066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:29:40.006978  525066 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:29:40.007554  525066 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-035643" does not appear in /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.007785  525066 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-487723/kubeconfig needs updating (will repair): [kubeconfig missing "functional-035643" cluster setting kubeconfig missing "functional-035643" context setting]
	I1212 00:29:40.008175  525066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.008787  525066 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.009179  525066 kapi.go:59] client config for functional-035643: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:40.009975  525066 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 00:29:40.010118  525066 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 00:29:40.010148  525066 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:29:40.010168  525066 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 00:29:40.010204  525066 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 00:29:40.010223  525066 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 00:29:40.010646  525066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:29:40.025803  525066 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 00:29:40.025893  525066 kubeadm.go:602] duration metric: took 30.929693ms to restartPrimaryControlPlane
	I1212 00:29:40.025918  525066 kubeadm.go:403] duration metric: took 82.978705ms to StartCluster
	I1212 00:29:40.025961  525066 settings.go:142] acquiring lock: {Name:mk274c10b2238dc32d72b68ac2e1ec517b8a72b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.026057  525066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.026847  525066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.027182  525066 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:29:40.027614  525066 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:29:40.027718  525066 addons.go:70] Setting storage-provisioner=true in profile "functional-035643"
	I1212 00:29:40.027733  525066 addons.go:239] Setting addon storage-provisioner=true in "functional-035643"
	I1212 00:29:40.027759  525066 host.go:66] Checking if "functional-035643" exists ...
	I1212 00:29:40.027683  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:40.027963  525066 addons.go:70] Setting default-storageclass=true in profile "functional-035643"
	I1212 00:29:40.028014  525066 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-035643"
	I1212 00:29:40.028265  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.028431  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.031408  525066 out.go:179] * Verifying Kubernetes components...
	I1212 00:29:40.035144  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:40.072983  525066 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.073191  525066 kapi.go:59] client config for functional-035643: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:40.073564  525066 addons.go:239] Setting addon default-storageclass=true in "functional-035643"
	I1212 00:29:40.073635  525066 host.go:66] Checking if "functional-035643" exists ...
	I1212 00:29:40.074143  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.079735  525066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:29:40.083203  525066 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:40.083224  525066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:29:40.083308  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:40.126926  525066 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:40.126953  525066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:29:40.127024  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:40.157562  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:40.176759  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:40.228329  525066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:40.297459  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:40.324896  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:40.970121  525066 node_ready.go:35] waiting up to 6m0s for node "functional-035643" to be "Ready" ...
	I1212 00:29:40.970322  525066 type.go:168] "Request Body" body=""
	I1212 00:29:40.970407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:40.970561  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:40.970616  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.970718  525066 retry.go:31] will retry after 204.18222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.970890  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:40.970976  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.971113  525066 retry.go:31] will retry after 159.994769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.971100  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:41.131658  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:41.175423  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:41.193550  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.193607  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.193625  525066 retry.go:31] will retry after 255.861028ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.245543  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.245583  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.245622  525066 retry.go:31] will retry after 363.545377ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.449762  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:41.471214  525066 type.go:168] "Request Body" body=""
	I1212 00:29:41.471319  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:41.471599  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:41.515695  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.515762  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.515785  525066 retry.go:31] will retry after 558.343872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.610204  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:41.681946  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.682005  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.682029  525066 retry.go:31] will retry after 553.13192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.971401  525066 type.go:168] "Request Body" body=""
	I1212 00:29:41.971545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:41.971960  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:42.075338  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:42.153789  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.153831  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.153875  525066 retry.go:31] will retry after 562.779161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.238244  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:42.309134  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.309235  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.309278  525066 retry.go:31] will retry after 839.848798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.470350  525066 type.go:168] "Request Body" body=""
	I1212 00:29:42.470438  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:42.470717  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:42.717299  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:42.779260  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.779300  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.779319  525066 retry.go:31] will retry after 1.384955704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.970802  525066 type.go:168] "Request Body" body=""
	I1212 00:29:42.970878  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:42.971167  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:42.971212  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:43.149494  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:43.213920  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:43.218125  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:43.218200  525066 retry.go:31] will retry after 1.154245365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:43.470517  525066 type.go:168] "Request Body" body=""
	I1212 00:29:43.470604  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:43.470922  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:43.970580  525066 type.go:168] "Request Body" body=""
	I1212 00:29:43.970743  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:43.971073  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:44.165470  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:44.225816  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:44.225880  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.225901  525066 retry.go:31] will retry after 2.063043455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.373318  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:44.437999  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:44.441831  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.441865  525066 retry.go:31] will retry after 1.856604218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.471071  525066 type.go:168] "Request Body" body=""
	I1212 00:29:44.471144  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:44.471437  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:44.971289  525066 type.go:168] "Request Body" body=""
	I1212 00:29:44.971379  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:44.971730  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:44.971780  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
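
Interleaved with the apply retries, node_ready polls GET /api/v1/nodes/functional-035643 about every 500ms, logging a warning and continuing through each "connection refused" until the apiserver answers. A minimal sketch of that poll loop (the URL and interval come from the log; the HTTP client setup, overall deadline, and readiness handling are illustrative assumptions, and a real client would authenticate with the cluster credentials and parse the node's Ready condition):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-035643"
	deadline := time.Now().Add(2 * time.Minute) // assumed overall budget
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Mirrors the warnings in the log: keep polling through
			// "connection refused" while the apiserver restarts.
			fmt.Println("will retry:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
		return // a real check would now inspect the Ready condition
	}
	fmt.Println("timed out waiting for the apiserver")
}
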
	I1212 00:29:45.470482  525066 type.go:168] "Request Body" body=""
	I1212 00:29:45.470622  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:45.470959  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:45.970491  525066 type.go:168] "Request Body" body=""
	I1212 00:29:45.970565  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:45.970940  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:46.289221  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:46.298644  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:46.387298  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:46.387341  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.387359  525066 retry.go:31] will retry after 2.162137781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.389923  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:46.389964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.389984  525066 retry.go:31] will retry after 2.885458194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.471167  525066 type.go:168] "Request Body" body=""
	I1212 00:29:46.471247  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:46.471565  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:46.971278  525066 type.go:168] "Request Body" body=""
	I1212 00:29:46.971393  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:46.971713  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:46.971800  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:47.471406  525066 type.go:168] "Request Body" body=""
	I1212 00:29:47.471481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:47.471794  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:47.970503  525066 type.go:168] "Request Body" body=""
	I1212 00:29:47.970590  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:47.970978  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:48.470459  525066 type.go:168] "Request Body" body=""
	I1212 00:29:48.470563  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:48.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:48.550228  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:48.609468  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:48.609564  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:48.609586  525066 retry.go:31] will retry after 5.142469671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:48.970999  525066 type.go:168] "Request Body" body=""
	I1212 00:29:48.971081  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:48.971378  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:49.275822  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:49.338921  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:49.338964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:49.338982  525066 retry.go:31] will retry after 3.130992497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:49.471334  525066 type.go:168] "Request Body" body=""
	I1212 00:29:49.471407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:49.471715  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:49.471774  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:49.970357  525066 type.go:168] "Request Body" body=""
	I1212 00:29:49.970428  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:49.970800  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:50.470449  525066 type.go:168] "Request Body" body=""
	I1212 00:29:50.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:50.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:50.970632  525066 type.go:168] "Request Body" body=""
	I1212 00:29:50.970736  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:50.971135  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:51.470850  525066 type.go:168] "Request Body" body=""
	I1212 00:29:51.470934  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:51.471301  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:51.971160  525066 type.go:168] "Request Body" body=""
	I1212 00:29:51.971232  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:51.971562  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:51.971629  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:52.470175  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:52.470342  525066 type.go:168] "Request Body" body=""
	I1212 00:29:52.470395  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:52.470704  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:52.525865  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:52.529169  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:52.529199  525066 retry.go:31] will retry after 5.202817608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:52.970512  525066 type.go:168] "Request Body" body=""
	I1212 00:29:52.970577  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:52.970929  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:53.470488  525066 type.go:168] "Request Body" body=""
	I1212 00:29:53.470560  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:53.470915  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:53.752286  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:53.818071  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:53.818120  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:53.818138  525066 retry.go:31] will retry after 7.493688168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:53.970432  525066 type.go:168] "Request Body" body=""
	I1212 00:29:53.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:53.970820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:54.470420  525066 type.go:168] "Request Body" body=""
	I1212 00:29:54.470487  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:54.470795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:54.470851  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:54.970811  525066 type.go:168] "Request Body" body=""
	I1212 00:29:54.970890  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:54.971241  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:55.471081  525066 type.go:168] "Request Body" body=""
	I1212 00:29:55.471155  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:55.471463  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:55.971189  525066 type.go:168] "Request Body" body=""
	I1212 00:29:55.971259  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:55.971627  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:56.470367  525066 type.go:168] "Request Body" body=""
	I1212 00:29:56.470446  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:56.470766  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:56.970401  525066 type.go:168] "Request Body" body=""
	I1212 00:29:56.970473  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:56.970813  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:56.970885  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:57.470373  525066 type.go:168] "Request Body" body=""
	I1212 00:29:57.470446  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:57.470716  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:57.732201  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:57.788085  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:57.792139  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:57.792170  525066 retry.go:31] will retry after 6.658571386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:57.970423  525066 type.go:168] "Request Body" body=""
	I1212 00:29:57.970495  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:57.970833  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:58.470545  525066 type.go:168] "Request Body" body=""
	I1212 00:29:58.470620  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:58.470971  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:58.970653  525066 type.go:168] "Request Body" body=""
	I1212 00:29:58.970748  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:58.971004  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:58.971063  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:59.470479  525066 type.go:168] "Request Body" body=""
	I1212 00:29:59.470553  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:59.470886  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:59.970903  525066 type.go:168] "Request Body" body=""
	I1212 00:29:59.970985  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:59.971299  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:00.470879  525066 type.go:168] "Request Body" body=""
	I1212 00:30:00.470978  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:00.471351  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:00.971213  525066 type.go:168] "Request Body" body=""
	I1212 00:30:00.971307  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:00.971736  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:00.971826  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:01.312112  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:01.378306  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:01.384542  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:01.384581  525066 retry.go:31] will retry after 9.383564416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:01.470976  525066 type.go:168] "Request Body" body=""
	I1212 00:30:01.471119  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:01.471452  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:01.971252  525066 type.go:168] "Request Body" body=""
	I1212 00:30:01.971351  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:01.971665  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:02.470427  525066 type.go:168] "Request Body" body=""
	I1212 00:30:02.470507  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:02.470916  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:02.970616  525066 type.go:168] "Request Body" body=""
	I1212 00:30:02.970721  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:02.971066  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:03.470621  525066 type.go:168] "Request Body" body=""
	I1212 00:30:03.470716  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:03.470992  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:03.471037  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:03.970767  525066 type.go:168] "Request Body" body=""
	I1212 00:30:03.970845  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:03.971214  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:04.450915  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:04.471249  525066 type.go:168] "Request Body" body=""
	I1212 00:30:04.471318  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:04.471581  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:04.504992  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:04.508551  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:04.508584  525066 retry.go:31] will retry after 16.635241248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:04.971271  525066 type.go:168] "Request Body" body=""
	I1212 00:30:04.971364  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:04.971628  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET /api/v1/nodes/functional-035643 polled every ~500ms through 00:30:10.470, every response refused; node_ready.go:55 logged "connection refused (will retry)" warnings at 00:30:05.970 and 00:30:08.470 ...]
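
	[Editor's note] The condensed round_trippers/node_ready lines above record minikube polling GET /api/v1/nodes/functional-035643 roughly every 500ms and checking the node's Ready condition. A sketch of equivalent polling with client-go, using the kubeconfig path and node name from this log (the helper itself is hypothetical):

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // waitNodeReady polls the node's Ready condition until it is True or ctx expires.
	    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	        tick := time.NewTicker(500 * time.Millisecond)
	        defer tick.Stop()
	        for {
	            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	            if err != nil {
	                fmt.Printf("will retry: %v\n", err) // e.g. connection refused while the apiserver is down
	            } else {
	                for _, c := range node.Status.Conditions {
	                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	                        return nil
	                    }
	                }
	            }
	            select {
	            case <-ctx.Done():
	                return ctx.Err()
	            case <-tick.C:
	            }
	        }
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	        defer cancel()
	        fmt.Println(waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "functional-035643"))
	    }
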
	I1212 00:30:10.768281  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:10.825103  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:10.828984  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:10.829014  525066 retry.go:31] will retry after 8.149625317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
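
	[Editor's note] Two different addresses fail with the same refusal in this log: kubectl's validation fetch goes to https://localhost:8441 (per the kubeconfig inside the node), while minikube's own node polling goes to https://192.168.49.2:8441 (the node IP from the host). Both point at the same apiserver port, so both are refused while it is down. A small Go probe illustrating the two dials (addresses taken from the log):

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // localhost:8441 is what kubectl-in-the-node dials; 192.168.49.2:8441
	        // is what the host-side pollers dial. Same apiserver, same refusal.
	        for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
	            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	            if err != nil {
	                fmt.Printf("%s: %v\n", addr, err) // expect: connect: connection refused
	                continue
	            }
	            conn.Close()
	            fmt.Printf("%s: reachable\n", addr)
	        }
	    }
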
	[... GET /api/v1/nodes/functional-035643 polling continued every ~500ms, 00:30:10.971 through 00:30:18.971, all refused; node_ready.go:55 retry warnings at 00:30:10.971, 00:30:13.470, 00:30:15.471, and 00:30:17.970 ...]
	I1212 00:30:18.979423  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:19.044083  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:19.044119  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:19.044140  525066 retry.go:31] will retry after 30.537522265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continued, 00:30:19.470 through 00:30:20.970, all refused; node_ready.go:55 retry warning at 00:30:19.971 ...]
	I1212 00:30:21.144446  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:21.207915  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:21.207964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:21.207983  525066 retry.go:31] will retry after 20.295589284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
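
	[Editor's note] Every apply here dies before the manifest is evaluated: with client-side validation enabled, kubectl first downloads the OpenAPI schema from /openapi/v2, and that fetch is what hits connection refused. The error text suggests --validate=false as a workaround; a sketch of issuing the same command that way from Go, mirroring the sudo invocation in the log (skipping validation would not actually help here, since the apply itself still needs the apiserver, which is why minikube retries instead):

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	    )

	    func main() {
	        // Mirrors the logged command, with validation disabled as the error suggests.
	        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
	            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
	            "apply", "--force", "--validate=false",
	            "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	        if err := cmd.Run(); err != nil {
	            fmt.Println("apply failed:", err) // still fails while the apiserver is unreachable
	        }
	    }
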
	[... GET /api/v1/nodes/functional-035643 polling continued every ~500ms, 00:30:21.471 through 00:30:41.470, all refused; node_ready.go:55 logged the same retry warning roughly every 2.5s, 00:30:22.471 through 00:30:41.471 ...]
	I1212 00:30:41.504392  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:41.561180  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:41.564784  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:41.564819  525066 retry.go:31] will retry after 29.925155821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
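
	[Editor's note] The logged delays (16.6s, 8.1s, 30.5s, 20.3s, 29.9s, 36.7s) look unordered only because storage-provisioner.yaml and storageclass.yaml are retried independently; each manifest follows its own jittered schedule, and the two sequences interleave in the log. A toy illustration of that structure (delays here are random and scaled down, not minikube's):

	    package main

	    import (
	        "fmt"
	        "math/rand"
	        "sync"
	        "time"
	    )

	    // applyWithRetry gives each manifest its own retry loop, so the
	    // per-manifest delays interleave in the combined output.
	    func applyWithRetry(wg *sync.WaitGroup, manifest string, apply func(string) error) {
	        defer wg.Done()
	        for attempt, base := 1, 10*time.Second; attempt <= 3; attempt++ {
	            if apply(manifest) == nil {
	                return
	            }
	            delay := base/2 + time.Duration(rand.Int63n(int64(base)))
	            fmt.Printf("%s: will retry after %s\n", manifest, delay)
	            time.Sleep(delay / 1000) // scaled down so the toy runs quickly
	        }
	    }

	    func main() {
	        var wg sync.WaitGroup
	        refused := func(string) error { return fmt.Errorf("connection refused") }
	        for _, m := range []string{"storage-provisioner.yaml", "storageclass.yaml"} {
	            wg.Add(1)
	            go applyWithRetry(&wg, m, refused)
	        }
	        wg.Wait()
	    }
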
	[... polling continued, 00:30:41.971 through 00:30:49.470, all refused; node_ready.go:55 retry warnings at 00:30:43.471, 00:30:45.970, and 00:30:47.971 ...]
	I1212 00:30:49.582168  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:49.635241  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:49.638539  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:49.638564  525066 retry.go:31] will retry after 36.706436998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continued, 00:30:49.971 through 00:30:56.971, all refused; node_ready.go:55 retry warnings at 00:30:49.971, 00:30:52.471, 00:30:54.971, and 00:30:56.971 ...]
	I1212 00:30:57.470392  525066 type.go:168] "Request Body" body=""
	I1212 00:30:57.470466  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:57.470735  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:57.970426  525066 type.go:168] "Request Body" body=""
	I1212 00:30:57.970498  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:57.970846  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:58.470550  525066 type.go:168] "Request Body" body=""
	I1212 00:30:58.470627  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:58.470983  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:58.970661  525066 type.go:168] "Request Body" body=""
	I1212 00:30:58.970747  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:58.971040  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:59.470746  525066 type.go:168] "Request Body" body=""
	I1212 00:30:59.470824  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:59.471166  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:59.471220  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:59.970964  525066 type.go:168] "Request Body" body=""
	I1212 00:30:59.971041  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:59.971352  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:00.470404  525066 type.go:168] "Request Body" body=""
	I1212 00:31:00.470477  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:00.470773  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:00.970466  525066 type.go:168] "Request Body" body=""
	I1212 00:31:00.970543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:00.970928  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:01.470645  525066 type.go:168] "Request Body" body=""
	I1212 00:31:01.470749  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:01.471096  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:01.970787  525066 type.go:168] "Request Body" body=""
	I1212 00:31:01.970856  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:01.971135  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:01.971178  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:02.470982  525066 type.go:168] "Request Body" body=""
	I1212 00:31:02.471077  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:02.471408  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:02.971193  525066 type.go:168] "Request Body" body=""
	I1212 00:31:02.971269  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:02.971592  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:03.471340  525066 type.go:168] "Request Body" body=""
	I1212 00:31:03.471407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:03.471649  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:03.970329  525066 type.go:168] "Request Body" body=""
	I1212 00:31:03.970409  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:03.970755  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:04.470469  525066 type.go:168] "Request Body" body=""
	I1212 00:31:04.470553  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:04.470917  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:04.470977  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:04.971081  525066 type.go:168] "Request Body" body=""
	I1212 00:31:04.971152  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:04.971443  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:05.471286  525066 type.go:168] "Request Body" body=""
	I1212 00:31:05.471363  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:05.471677  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:05.970382  525066 type.go:168] "Request Body" body=""
	I1212 00:31:05.970464  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:05.970758  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:06.470420  525066 type.go:168] "Request Body" body=""
	I1212 00:31:06.470484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:06.470788  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:06.970555  525066 type.go:168] "Request Body" body=""
	I1212 00:31:06.970636  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:06.970974  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:06.971042  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:07.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:31:07.470530  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:07.470902  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:07.970448  525066 type.go:168] "Request Body" body=""
	I1212 00:31:07.970518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:07.970835  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:08.470450  525066 type.go:168] "Request Body" body=""
	I1212 00:31:08.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:08.470867  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:08.970558  525066 type.go:168] "Request Body" body=""
	I1212 00:31:08.970638  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:08.970976  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:09.470659  525066 type.go:168] "Request Body" body=""
	I1212 00:31:09.470748  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:09.471069  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:09.471163  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:09.971114  525066 type.go:168] "Request Body" body=""
	I1212 00:31:09.971187  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:09.971512  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:10.470533  525066 type.go:168] "Request Body" body=""
	I1212 00:31:10.470613  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:10.470969  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:10.970732  525066 type.go:168] "Request Body" body=""
	I1212 00:31:10.970807  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:10.971084  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:11.470443  525066 type.go:168] "Request Body" body=""
	I1212 00:31:11.470518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:11.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:11.491140  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:31:11.552135  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:11.552186  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:11.552275  525066 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
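
	(Editor's note on the failure above: kubectl validates a manifest against the apiserver's /openapi/v2 schema before applying it, so with port 8441 refusing connections the validation step itself fails and kubectl exits with status 1, which addons.go treats as retryable. kubectl's own error text suggests --validate=false, but that only skips the schema download; the apply would still need a reachable apiserver. Below is a hedged sketch of the "apply failed, will retry" loop, not minikube's code; the manifest path, retry budget, backoff, and the presence of kubectl on PATH are assumptions.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			// --force mirrors the invocation in the log; kubectl must be on PATH.
			cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
			fmt.Println(lastErr)
			time.Sleep(15 * time.Second) // assumed backoff between attempts
		}
		return lastErr
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
			fmt.Println("giving up:", err)
		}
	}
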
	I1212 00:31:11.970638  525066 type.go:168] "Request Body" body=""
	I1212 00:31:11.970757  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:11.971089  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:11.971151  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:12.470532  525066 type.go:168] "Request Body" body=""
	I1212 00:31:12.470609  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:12.470899  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:12.970411  525066 type.go:168] "Request Body" body=""
	I1212 00:31:12.970504  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:12.970815  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:13.470503  525066 type.go:168] "Request Body" body=""
	I1212 00:31:13.470574  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:13.470924  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:13.970619  525066 type.go:168] "Request Body" body=""
	I1212 00:31:13.970706  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:13.970963  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:14.470738  525066 type.go:168] "Request Body" body=""
	I1212 00:31:14.470812  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:14.471133  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:14.471187  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:14.971146  525066 type.go:168] "Request Body" body=""
	I1212 00:31:14.971222  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:14.971541  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:15.471278  525066 type.go:168] "Request Body" body=""
	I1212 00:31:15.471365  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:15.471609  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:15.970340  525066 type.go:168] "Request Body" body=""
	I1212 00:31:15.970431  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:15.970804  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:16.470509  525066 type.go:168] "Request Body" body=""
	I1212 00:31:16.470591  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:16.470920  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:16.970406  525066 type.go:168] "Request Body" body=""
	I1212 00:31:16.970483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:16.970790  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:16.970848  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:17.470447  525066 type.go:168] "Request Body" body=""
	I1212 00:31:17.470519  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:17.470842  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:17.970550  525066 type.go:168] "Request Body" body=""
	I1212 00:31:17.970627  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:17.970937  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:18.470378  525066 type.go:168] "Request Body" body=""
	I1212 00:31:18.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:18.470742  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:18.970443  525066 type.go:168] "Request Body" body=""
	I1212 00:31:18.970523  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:18.970864  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:18.970923  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:19.470460  525066 type.go:168] "Request Body" body=""
	I1212 00:31:19.470532  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:19.470860  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:19.970827  525066 type.go:168] "Request Body" body=""
	I1212 00:31:19.970900  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:19.971156  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:20.471114  525066 type.go:168] "Request Body" body=""
	I1212 00:31:20.471192  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:20.471496  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:20.971299  525066 type.go:168] "Request Body" body=""
	I1212 00:31:20.971376  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:20.971723  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:20.971777  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:21.471362  525066 type.go:168] "Request Body" body=""
	I1212 00:31:21.471430  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:21.471729  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:21.970437  525066 type.go:168] "Request Body" body=""
	I1212 00:31:21.970510  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:21.970868  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:22.470577  525066 type.go:168] "Request Body" body=""
	I1212 00:31:22.470650  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:22.470985  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:22.970698  525066 type.go:168] "Request Body" body=""
	I1212 00:31:22.970765  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:22.971007  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:23.470436  525066 type.go:168] "Request Body" body=""
	I1212 00:31:23.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:23.470861  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:23.470914  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:23.970561  525066 type.go:168] "Request Body" body=""
	I1212 00:31:23.970643  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:23.970973  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:24.470353  525066 type.go:168] "Request Body" body=""
	I1212 00:31:24.470466  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:24.470739  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:24.970663  525066 type.go:168] "Request Body" body=""
	I1212 00:31:24.970762  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:24.971091  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:25.470468  525066 type.go:168] "Request Body" body=""
	I1212 00:31:25.470542  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:25.470865  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:25.970393  525066 type.go:168] "Request Body" body=""
	I1212 00:31:25.970464  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:25.970807  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:25.970864  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:26.345425  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:31:26.402811  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:26.406955  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:26.407059  525066 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 00:31:26.410095  525066 out.go:179] * Enabled addons: 
	I1212 00:31:26.413891  525066 addons.go:530] duration metric: took 1m46.38627975s for enable addons: enabled=[]
	I1212 00:31:26.471160  525066 type.go:168] "Request Body" body=""
	I1212 00:31:26.471237  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:26.471562  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:26.971357  525066 type.go:168] "Request Body" body=""
	I1212 00:31:26.971432  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:26.971737  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:27.470432  525066 type.go:168] "Request Body" body=""
	I1212 00:31:27.470500  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:27.470799  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:27.970424  525066 type.go:168] "Request Body" body=""
	I1212 00:31:27.970502  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:27.970862  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:27.970917  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:28.470589  525066 type.go:168] "Request Body" body=""
	I1212 00:31:28.470667  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:28.470983  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:28.970654  525066 type.go:168] "Request Body" body=""
	I1212 00:31:28.970741  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:28.970990  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:29.470447  525066 type.go:168] "Request Body" body=""
	I1212 00:31:29.470518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:29.470827  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:29.970750  525066 type.go:168] "Request Body" body=""
	I1212 00:31:29.970827  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:29.971160  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:29.971218  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:30.471043  525066 type.go:168] "Request Body" body=""
	I1212 00:31:30.471109  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:30.471376  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:30.971171  525066 type.go:168] "Request Body" body=""
	I1212 00:31:30.971241  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:30.971550  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:31.471358  525066 type.go:168] "Request Body" body=""
	I1212 00:31:31.471445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:31.471839  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:31.970419  525066 type.go:168] "Request Body" body=""
	I1212 00:31:31.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:31.970752  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:32.470476  525066 type.go:168] "Request Body" body=""
	I1212 00:31:32.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:32.470896  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:32.470960  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:32.970646  525066 type.go:168] "Request Body" body=""
	I1212 00:31:32.970737  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:32.971068  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:33.470394  525066 type.go:168] "Request Body" body=""
	I1212 00:31:33.470464  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:33.470730  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:33.970441  525066 type.go:168] "Request Body" body=""
	I1212 00:31:33.970512  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:33.970887  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:34.470452  525066 type.go:168] "Request Body" body=""
	I1212 00:31:34.470528  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:34.471050  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:34.471101  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:34.971076  525066 type.go:168] "Request Body" body=""
	I1212 00:31:34.971150  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:34.971412  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:35.471345  525066 type.go:168] "Request Body" body=""
	I1212 00:31:35.471417  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:35.471701  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:35.970416  525066 type.go:168] "Request Body" body=""
	I1212 00:31:35.970491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:35.970794  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:36.470413  525066 type.go:168] "Request Body" body=""
	I1212 00:31:36.470483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:36.470801  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:36.970488  525066 type.go:168] "Request Body" body=""
	I1212 00:31:36.970578  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:36.970944  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:36.970998  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:37.470498  525066 type.go:168] "Request Body" body=""
	I1212 00:31:37.470572  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:37.470867  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:37.970414  525066 type.go:168] "Request Body" body=""
	I1212 00:31:37.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:37.970840  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:38.470560  525066 type.go:168] "Request Body" body=""
	I1212 00:31:38.470640  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:38.470997  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:38.970458  525066 type.go:168] "Request Body" body=""
	I1212 00:31:38.970542  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:38.970875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:39.470418  525066 type.go:168] "Request Body" body=""
	I1212 00:31:39.470483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:39.470746  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:39.470792  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:39.970766  525066 type.go:168] "Request Body" body=""
	I1212 00:31:39.970840  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:39.971186  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:40.470676  525066 type.go:168] "Request Body" body=""
	I1212 00:31:40.470773  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:40.471187  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:40.970410  525066 type.go:168] "Request Body" body=""
	I1212 00:31:40.970498  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:40.970933  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:41.470798  525066 type.go:168] "Request Body" body=""
	I1212 00:31:41.470881  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:41.471270  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:41.471324  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:41.971113  525066 type.go:168] "Request Body" body=""
	I1212 00:31:41.971189  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:41.971540  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:42.471293  525066 type.go:168] "Request Body" body=""
	I1212 00:31:42.471364  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:42.471623  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:42.971377  525066 type.go:168] "Request Body" body=""
	I1212 00:31:42.971447  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:42.971777  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:43.470484  525066 type.go:168] "Request Body" body=""
	I1212 00:31:43.470559  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:43.470898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:43.970565  525066 type.go:168] "Request Body" body=""
	I1212 00:31:43.970635  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:43.970946  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:43.970995  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-035643 poll repeats every ~500ms from 00:31:44 through 00:32:43, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logs the "will retry" warning roughly every two seconds throughout (a minimal sketch of this poll loop follows the excerpt) ...]
	I1212 00:32:44.470423  525066 type.go:168] "Request Body" body=""
	I1212 00:32:44.470501  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:44.470842  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:44.970773  525066 type.go:168] "Request Body" body=""
	I1212 00:32:44.970846  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:44.971174  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:45.470779  525066 type.go:168] "Request Body" body=""
	I1212 00:32:45.470852  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:45.471113  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:45.970429  525066 type.go:168] "Request Body" body=""
	I1212 00:32:45.970512  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:45.970857  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:45.970913  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:46.470599  525066 type.go:168] "Request Body" body=""
	I1212 00:32:46.470698  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:46.471036  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:46.970724  525066 type.go:168] "Request Body" body=""
	I1212 00:32:46.970811  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:46.971101  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:47.470415  525066 type.go:168] "Request Body" body=""
	I1212 00:32:47.470487  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:47.470829  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:47.970458  525066 type.go:168] "Request Body" body=""
	I1212 00:32:47.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:47.970877  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:47.970934  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:48.470399  525066 type.go:168] "Request Body" body=""
	I1212 00:32:48.470468  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:48.470756  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:48.970465  525066 type.go:168] "Request Body" body=""
	I1212 00:32:48.970543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:48.970922  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:49.470651  525066 type.go:168] "Request Body" body=""
	I1212 00:32:49.470742  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:49.471098  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:49.970876  525066 type.go:168] "Request Body" body=""
	I1212 00:32:49.970959  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:49.971229  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:49.971270  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:50.471244  525066 type.go:168] "Request Body" body=""
	I1212 00:32:50.471322  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:50.471670  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:50.970390  525066 type.go:168] "Request Body" body=""
	I1212 00:32:50.970471  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:50.970817  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:51.470505  525066 type.go:168] "Request Body" body=""
	I1212 00:32:51.470577  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:51.470954  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:51.970430  525066 type.go:168] "Request Body" body=""
	I1212 00:32:51.970500  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:51.970854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:52.470564  525066 type.go:168] "Request Body" body=""
	I1212 00:32:52.470637  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:52.471003  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:52.471056  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:52.970380  525066 type.go:168] "Request Body" body=""
	I1212 00:32:52.970448  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:52.970779  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:53.470481  525066 type.go:168] "Request Body" body=""
	I1212 00:32:53.470554  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:53.470926  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:53.970647  525066 type.go:168] "Request Body" body=""
	I1212 00:32:53.970737  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:53.971090  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:54.471319  525066 type.go:168] "Request Body" body=""
	I1212 00:32:54.471392  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:54.471642  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:54.471682  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:54.970626  525066 type.go:168] "Request Body" body=""
	I1212 00:32:54.970705  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:54.971020  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:55.470451  525066 type.go:168] "Request Body" body=""
	I1212 00:32:55.470522  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:55.470854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:55.970384  525066 type.go:168] "Request Body" body=""
	I1212 00:32:55.970461  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:55.970755  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:56.470437  525066 type.go:168] "Request Body" body=""
	I1212 00:32:56.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:56.470867  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:56.970577  525066 type.go:168] "Request Body" body=""
	I1212 00:32:56.970650  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:56.971023  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:56.971077  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:57.470742  525066 type.go:168] "Request Body" body=""
	I1212 00:32:57.470815  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:57.471167  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:57.970872  525066 type.go:168] "Request Body" body=""
	I1212 00:32:57.970953  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:57.971280  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:58.471062  525066 type.go:168] "Request Body" body=""
	I1212 00:32:58.471134  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:58.471462  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:58.971255  525066 type.go:168] "Request Body" body=""
	I1212 00:32:58.971322  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:58.971578  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:58.971620  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:59.471341  525066 type.go:168] "Request Body" body=""
	I1212 00:32:59.471410  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:59.471735  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:59.970614  525066 type.go:168] "Request Body" body=""
	I1212 00:32:59.970716  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:59.971048  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:00.470331  525066 type.go:168] "Request Body" body=""
	I1212 00:33:00.470413  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:00.470671  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:00.970395  525066 type.go:168] "Request Body" body=""
	I1212 00:33:00.970485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:00.970885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:01.470464  525066 type.go:168] "Request Body" body=""
	I1212 00:33:01.470537  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:01.470879  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:01.470943  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:01.970438  525066 type.go:168] "Request Body" body=""
	I1212 00:33:01.970506  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:01.970852  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:02.470619  525066 type.go:168] "Request Body" body=""
	I1212 00:33:02.470714  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:02.471075  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:02.970791  525066 type.go:168] "Request Body" body=""
	I1212 00:33:02.970863  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:02.971208  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:03.470951  525066 type.go:168] "Request Body" body=""
	I1212 00:33:03.471027  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:03.471358  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:03.471426  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:03.971137  525066 type.go:168] "Request Body" body=""
	I1212 00:33:03.971208  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:03.971542  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:04.471345  525066 type.go:168] "Request Body" body=""
	I1212 00:33:04.471415  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:04.471746  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:04.970402  525066 type.go:168] "Request Body" body=""
	I1212 00:33:04.970479  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:04.970766  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:05.470439  525066 type.go:168] "Request Body" body=""
	I1212 00:33:05.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:05.470849  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:05.970564  525066 type.go:168] "Request Body" body=""
	I1212 00:33:05.970637  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:05.970984  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:05.971040  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:06.470424  525066 type.go:168] "Request Body" body=""
	I1212 00:33:06.470502  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:06.470781  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:06.970465  525066 type.go:168] "Request Body" body=""
	I1212 00:33:06.970547  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:06.970898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:07.470461  525066 type.go:168] "Request Body" body=""
	I1212 00:33:07.470546  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:07.470897  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:07.970572  525066 type.go:168] "Request Body" body=""
	I1212 00:33:07.970648  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:07.970982  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:08.470661  525066 type.go:168] "Request Body" body=""
	I1212 00:33:08.470757  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:08.471101  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:08.471155  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:08.970834  525066 type.go:168] "Request Body" body=""
	I1212 00:33:08.970915  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:08.971261  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:09.471007  525066 type.go:168] "Request Body" body=""
	I1212 00:33:09.471080  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:09.471383  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:09.971222  525066 type.go:168] "Request Body" body=""
	I1212 00:33:09.971292  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:09.971613  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:10.470464  525066 type.go:168] "Request Body" body=""
	I1212 00:33:10.470536  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:10.470871  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:10.970418  525066 type.go:168] "Request Body" body=""
	I1212 00:33:10.970483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:10.970802  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:10.970858  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:11.470473  525066 type.go:168] "Request Body" body=""
	I1212 00:33:11.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:11.470900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:11.970438  525066 type.go:168] "Request Body" body=""
	I1212 00:33:11.970517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:11.970868  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:12.470560  525066 type.go:168] "Request Body" body=""
	I1212 00:33:12.470632  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:12.470928  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:12.970459  525066 type.go:168] "Request Body" body=""
	I1212 00:33:12.970538  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:12.970885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:12.970943  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:13.470422  525066 type.go:168] "Request Body" body=""
	I1212 00:33:13.470501  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:13.470840  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:13.970429  525066 type.go:168] "Request Body" body=""
	I1212 00:33:13.970498  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:13.970796  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:14.470442  525066 type.go:168] "Request Body" body=""
	I1212 00:33:14.470515  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:14.470869  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:14.970766  525066 type.go:168] "Request Body" body=""
	I1212 00:33:14.970842  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:14.971184  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:14.971238  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:15.470940  525066 type.go:168] "Request Body" body=""
	I1212 00:33:15.471011  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:15.471271  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:15.971043  525066 type.go:168] "Request Body" body=""
	I1212 00:33:15.971115  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:15.971480  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:16.471281  525066 type.go:168] "Request Body" body=""
	I1212 00:33:16.471357  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:16.471735  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:16.970410  525066 type.go:168] "Request Body" body=""
	I1212 00:33:16.970477  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:16.970775  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:17.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:33:17.470524  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:17.470842  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:17.470887  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:17.970551  525066 type.go:168] "Request Body" body=""
	I1212 00:33:17.970623  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:17.970977  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:18.470481  525066 type.go:168] "Request Body" body=""
	I1212 00:33:18.470557  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:18.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:18.970446  525066 type.go:168] "Request Body" body=""
	I1212 00:33:18.970516  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:18.970881  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:19.470588  525066 type.go:168] "Request Body" body=""
	I1212 00:33:19.470670  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:19.471039  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:19.471096  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:19.970916  525066 type.go:168] "Request Body" body=""
	I1212 00:33:19.971010  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:19.971330  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:20.471261  525066 type.go:168] "Request Body" body=""
	I1212 00:33:20.471340  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:20.471686  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:20.970404  525066 type.go:168] "Request Body" body=""
	I1212 00:33:20.970481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:20.970812  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:21.470406  525066 type.go:168] "Request Body" body=""
	I1212 00:33:21.470472  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:21.470744  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:21.970441  525066 type.go:168] "Request Body" body=""
	I1212 00:33:21.970514  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:21.970842  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:21.970902  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:22.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:33:22.470541  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:22.470874  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:22.970430  525066 type.go:168] "Request Body" body=""
	I1212 00:33:22.970499  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:22.970829  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:23.470435  525066 type.go:168] "Request Body" body=""
	I1212 00:33:23.470504  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:23.470830  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:23.970480  525066 type.go:168] "Request Body" body=""
	I1212 00:33:23.970555  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:23.970905  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:23.970965  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:24.470433  525066 type.go:168] "Request Body" body=""
	I1212 00:33:24.470506  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:24.470783  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:24.970803  525066 type.go:168] "Request Body" body=""
	I1212 00:33:24.970886  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:24.971251  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:25.471119  525066 type.go:168] "Request Body" body=""
	I1212 00:33:25.471192  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:25.471497  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:25.971209  525066 type.go:168] "Request Body" body=""
	I1212 00:33:25.971285  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:25.971580  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:25.971621  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:26.470369  525066 type.go:168] "Request Body" body=""
	I1212 00:33:26.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:26.470776  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:26.970409  525066 type.go:168] "Request Body" body=""
	I1212 00:33:26.970481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:26.970791  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:27.470405  525066 type.go:168] "Request Body" body=""
	I1212 00:33:27.470473  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:27.470753  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:27.970463  525066 type.go:168] "Request Body" body=""
	I1212 00:33:27.970537  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:27.970900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:28.470443  525066 type.go:168] "Request Body" body=""
	I1212 00:33:28.470515  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:28.470860  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:28.470912  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:28.970541  525066 type.go:168] "Request Body" body=""
	I1212 00:33:28.970608  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:28.970887  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:29.470489  525066 type.go:168] "Request Body" body=""
	I1212 00:33:29.470563  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:29.470897  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:29.970848  525066 type.go:168] "Request Body" body=""
	I1212 00:33:29.970931  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:29.971286  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:30.470592  525066 type.go:168] "Request Body" body=""
	I1212 00:33:30.470659  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:30.470963  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:30.471010  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:30.970450  525066 type.go:168] "Request Body" body=""
	I1212 00:33:30.970524  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:30.970868  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:31.470588  525066 type.go:168] "Request Body" body=""
	I1212 00:33:31.470666  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:31.471003  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:31.970391  525066 type.go:168] "Request Body" body=""
	I1212 00:33:31.970461  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:31.970788  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:32.470477  525066 type.go:168] "Request Body" body=""
	I1212 00:33:32.470550  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:32.470900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:32.970483  525066 type.go:168] "Request Body" body=""
	I1212 00:33:32.970557  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:32.970910  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:32.970970  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:33.470618  525066 type.go:168] "Request Body" body=""
	I1212 00:33:33.470712  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:33.470974  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:33.970444  525066 type.go:168] "Request Body" body=""
	I1212 00:33:33.970517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:33.970882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:34.470466  525066 type.go:168] "Request Body" body=""
	I1212 00:33:34.470544  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:34.470888  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:34.974811  525066 type.go:168] "Request Body" body=""
	I1212 00:33:34.974888  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:34.975210  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:34.975263  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-035643 poll repeats on a ~500ms cadence from 00:33:35.470 through 00:34:35.471, every request carrying the same Accept/User-Agent headers and returning an empty response (status="" headers="" milliseconds=0), with node_ready.go:55 logging the same "connection refused" warning roughly every 2 seconds ...]
	I1212 00:34:35.971153  525066 type.go:168] "Request Body" body=""
	I1212 00:34:35.971231  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:35.971540  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:35.971597  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
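
The loop above repeats on a fixed 500ms cadence: minikube's node_ready check GETs the node object and, while the apiserver socket refuses connections, logs a W-level "will retry" line instead of failing. Below is a minimal sketch of that polling pattern using client-go. The node name, 500ms interval, and retry-on-error behavior come from the log; the kubeconfig path and timeout are assumptions, and this is illustrative, not minikube's actual node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is a placeholder assumption, not taken from the report.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms, as in the log, until the node reports Ready.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-035643", metav1.GetOptions{})
			if err != nil {
				// Matches the W-level lines above: log the transient
				// failure (e.g. connection refused) and keep retrying.
				fmt.Printf("error getting node (will retry): %v\n", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Printf("node never became Ready: %v\n", err)
	}
}

Returning (false, nil) from the condition keeps the poller going, which is why the log shows hundreds of identical request/response pairs rather than a single hard failure; only the overall timeout ends the loop.
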
	I1212 00:34:36.471328  525066 type.go:168] "Request Body" body=""
	I1212 00:34:36.471400  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:36.471724  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:36.970408  525066 type.go:168] "Request Body" body=""
	I1212 00:34:36.970487  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:36.970849  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:37.470460  525066 type.go:168] "Request Body" body=""
	I1212 00:34:37.470535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:37.470898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:37.970425  525066 type.go:168] "Request Body" body=""
	I1212 00:34:37.970536  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:37.970843  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:38.470574  525066 type.go:168] "Request Body" body=""
	I1212 00:34:38.470652  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:38.470997  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:38.471051  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:38.970762  525066 type.go:168] "Request Body" body=""
	I1212 00:34:38.970837  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:38.971165  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:39.470821  525066 type.go:168] "Request Body" body=""
	I1212 00:34:39.470925  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:39.471276  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:39.971132  525066 type.go:168] "Request Body" body=""
	I1212 00:34:39.971208  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:39.971504  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:40.470403  525066 type.go:168] "Request Body" body=""
	I1212 00:34:40.470483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:40.470859  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:40.970437  525066 type.go:168] "Request Body" body=""
	I1212 00:34:40.970509  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:40.970780  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:40.970828  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:41.470472  525066 type.go:168] "Request Body" body=""
	I1212 00:34:41.470541  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:41.471156  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:41.970992  525066 type.go:168] "Request Body" body=""
	I1212 00:34:41.971063  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:41.971400  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:42.471111  525066 type.go:168] "Request Body" body=""
	I1212 00:34:42.471182  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:42.471437  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:42.971249  525066 type.go:168] "Request Body" body=""
	I1212 00:34:42.971318  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:42.971637  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:42.971693  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:43.470373  525066 type.go:168] "Request Body" body=""
	I1212 00:34:43.470452  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:43.470770  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:43.970406  525066 type.go:168] "Request Body" body=""
	I1212 00:34:43.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:43.970813  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:44.470444  525066 type.go:168] "Request Body" body=""
	I1212 00:34:44.470522  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:44.470871  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:44.970718  525066 type.go:168] "Request Body" body=""
	I1212 00:34:44.970796  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:44.971129  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:45.470791  525066 type.go:168] "Request Body" body=""
	I1212 00:34:45.470857  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:45.471157  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:45.471208  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:45.971058  525066 type.go:168] "Request Body" body=""
	I1212 00:34:45.971144  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:45.971575  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:46.471360  525066 type.go:168] "Request Body" body=""
	I1212 00:34:46.471430  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:46.471804  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:46.970483  525066 type.go:168] "Request Body" body=""
	I1212 00:34:46.970548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:46.970854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:47.470504  525066 type.go:168] "Request Body" body=""
	I1212 00:34:47.470579  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:47.470919  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:47.970616  525066 type.go:168] "Request Body" body=""
	I1212 00:34:47.970715  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:47.971061  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:47.971117  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:48.470377  525066 type.go:168] "Request Body" body=""
	I1212 00:34:48.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:48.470744  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:48.970454  525066 type.go:168] "Request Body" body=""
	I1212 00:34:48.970525  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:48.970871  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:49.470443  525066 type.go:168] "Request Body" body=""
	I1212 00:34:49.470517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:49.470842  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:49.970842  525066 type.go:168] "Request Body" body=""
	I1212 00:34:49.970921  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:49.971185  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:49.971235  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:50.471289  525066 type.go:168] "Request Body" body=""
	I1212 00:34:50.471363  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:50.471683  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:50.970385  525066 type.go:168] "Request Body" body=""
	I1212 00:34:50.970470  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:50.970820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:51.470498  525066 type.go:168] "Request Body" body=""
	I1212 00:34:51.470577  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:51.470901  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:51.970643  525066 type.go:168] "Request Body" body=""
	I1212 00:34:51.970737  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:51.971081  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:52.470467  525066 type.go:168] "Request Body" body=""
	I1212 00:34:52.470543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:52.470852  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:52.470906  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:52.970347  525066 type.go:168] "Request Body" body=""
	I1212 00:34:52.970413  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:52.970656  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:53.470361  525066 type.go:168] "Request Body" body=""
	I1212 00:34:53.470433  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:53.470758  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:53.970445  525066 type.go:168] "Request Body" body=""
	I1212 00:34:53.970521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:53.970846  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:54.470385  525066 type.go:168] "Request Body" body=""
	I1212 00:34:54.470456  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:54.470728  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:54.970731  525066 type.go:168] "Request Body" body=""
	I1212 00:34:54.970808  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:54.971141  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:54.971194  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:55.470962  525066 type.go:168] "Request Body" body=""
	I1212 00:34:55.471032  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:55.471384  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:55.971164  525066 type.go:168] "Request Body" body=""
	I1212 00:34:55.971235  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:55.971578  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:56.471360  525066 type.go:168] "Request Body" body=""
	I1212 00:34:56.471429  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:56.471744  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:56.970433  525066 type.go:168] "Request Body" body=""
	I1212 00:34:56.970514  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:56.970841  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:57.470322  525066 type.go:168] "Request Body" body=""
	I1212 00:34:57.470393  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:57.470705  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:57.470754  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:57.970446  525066 type.go:168] "Request Body" body=""
	I1212 00:34:57.970517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:57.970859  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:58.470584  525066 type.go:168] "Request Body" body=""
	I1212 00:34:58.470658  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:58.470977  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:58.970390  525066 type.go:168] "Request Body" body=""
	I1212 00:34:58.970459  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:58.970753  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:59.470435  525066 type.go:168] "Request Body" body=""
	I1212 00:34:59.470506  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:59.470847  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:59.470910  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
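
Each "Request"/"Response" pair in this log is emitted by a logging transport wrapped around the HTTP client: the verb, URL, and selected headers are logged before delegating, and the status plus elapsed milliseconds after. The empty status="" and milliseconds=0 fields appear because a refused TCP dial errors out before any HTTP status line exists. A minimal sketch of such a wrapper, assuming only net/http (names here are illustrative; client-go's real round_trippers.go differs):

package main

import (
	"log"
	"net/http"
	"time"
)

type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("Request verb=%q url=%q accept=%q user-agent=%q",
		req.Method, req.URL.String(), req.Header.Get("Accept"), req.Header.Get("User-Agent"))
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// A refused TCP connection surfaces here as a transport error,
		// before any HTTP status exists -- hence status="" above.
		log.Printf("Response status=%q milliseconds=%d err=%v", "", ms, err)
		return nil, err
	}
	log.Printf("Response status=%q milliseconds=%d", resp.Status, ms)
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	// Hypothetical probe of the same endpoint the log polls.
	if _, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-035643"); err != nil {
		log.Printf("probe failed: %v", err)
	}
}
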
	I1212 00:34:59.970877  525066 type.go:168] "Request Body" body=""
	I1212 00:34:59.970974  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:59.971302  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:00.471231  525066 type.go:168] "Request Body" body=""
	I1212 00:35:00.471314  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:00.471616  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:00.970336  525066 type.go:168] "Request Body" body=""
	I1212 00:35:00.970416  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:00.970781  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:01.470511  525066 type.go:168] "Request Body" body=""
	I1212 00:35:01.470600  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:01.470948  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:01.471002  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:01.970437  525066 type.go:168] "Request Body" body=""
	I1212 00:35:01.970523  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:01.970847  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:02.470535  525066 type.go:168] "Request Body" body=""
	I1212 00:35:02.470612  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:02.470975  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:02.970668  525066 type.go:168] "Request Body" body=""
	I1212 00:35:02.970763  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:02.971094  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:03.470416  525066 type.go:168] "Request Body" body=""
	I1212 00:35:03.470482  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:03.470744  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:03.970472  525066 type.go:168] "Request Body" body=""
	I1212 00:35:03.970553  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:03.970942  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:03.971000  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:04.470451  525066 type.go:168] "Request Body" body=""
	I1212 00:35:04.470558  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:04.470919  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:04.970804  525066 type.go:168] "Request Body" body=""
	I1212 00:35:04.970871  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:04.971144  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:05.470467  525066 type.go:168] "Request Body" body=""
	I1212 00:35:05.470537  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:05.470942  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:05.970499  525066 type.go:168] "Request Body" body=""
	I1212 00:35:05.970585  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:05.970938  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:06.470332  525066 type.go:168] "Request Body" body=""
	I1212 00:35:06.470408  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:06.470730  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:06.470781  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:06.970454  525066 type.go:168] "Request Body" body=""
	I1212 00:35:06.970525  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:06.970921  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:07.470487  525066 type.go:168] "Request Body" body=""
	I1212 00:35:07.470570  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:07.470917  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:07.970583  525066 type.go:168] "Request Body" body=""
	I1212 00:35:07.970653  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:07.970985  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:08.470669  525066 type.go:168] "Request Body" body=""
	I1212 00:35:08.470768  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:08.471111  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:08.471164  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:08.970674  525066 type.go:168] "Request Body" body=""
	I1212 00:35:08.970770  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:08.971117  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:09.470791  525066 type.go:168] "Request Body" body=""
	I1212 00:35:09.470928  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:09.471187  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:09.971168  525066 type.go:168] "Request Body" body=""
	I1212 00:35:09.971242  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:09.971558  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:10.470457  525066 type.go:168] "Request Body" body=""
	I1212 00:35:10.470566  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:10.470928  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:10.970414  525066 type.go:168] "Request Body" body=""
	I1212 00:35:10.970483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:10.970810  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:10.970861  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:11.470561  525066 type.go:168] "Request Body" body=""
	I1212 00:35:11.470643  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:11.471038  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:11.970830  525066 type.go:168] "Request Body" body=""
	I1212 00:35:11.970907  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:11.971183  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:12.470991  525066 type.go:168] "Request Body" body=""
	I1212 00:35:12.471059  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:12.471390  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:12.971182  525066 type.go:168] "Request Body" body=""
	I1212 00:35:12.971260  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:12.971601  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:12.971680  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:13.471284  525066 type.go:168] "Request Body" body=""
	I1212 00:35:13.471356  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:13.471730  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:13.970398  525066 type.go:168] "Request Body" body=""
	I1212 00:35:13.970477  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:13.970795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:14.470449  525066 type.go:168] "Request Body" body=""
	I1212 00:35:14.470529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:14.470838  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:14.970781  525066 type.go:168] "Request Body" body=""
	I1212 00:35:14.970875  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:14.971268  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:15.471033  525066 type.go:168] "Request Body" body=""
	I1212 00:35:15.471104  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:15.471367  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:15.471407  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:15.971146  525066 type.go:168] "Request Body" body=""
	I1212 00:35:15.971216  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:15.971526  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:16.471298  525066 type.go:168] "Request Body" body=""
	I1212 00:35:16.471376  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:16.471748  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:16.970401  525066 type.go:168] "Request Body" body=""
	I1212 00:35:16.970468  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:16.970761  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:17.470431  525066 type.go:168] "Request Body" body=""
	I1212 00:35:17.470501  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:17.470854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:17.970433  525066 type.go:168] "Request Body" body=""
	I1212 00:35:17.970504  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:17.970880  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:17.970936  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:18.470421  525066 type.go:168] "Request Body" body=""
	I1212 00:35:18.470491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:18.470768  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:18.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:35:18.970526  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:18.970872  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:19.470463  525066 type.go:168] "Request Body" body=""
	I1212 00:35:19.470546  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:19.470905  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:19.970857  525066 type.go:168] "Request Body" body=""
	I1212 00:35:19.970930  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:19.971189  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:19.971235  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:20.471222  525066 type.go:168] "Request Body" body=""
	I1212 00:35:20.471296  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:20.471592  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:20.971375  525066 type.go:168] "Request Body" body=""
	I1212 00:35:20.971447  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:20.971753  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:21.470418  525066 type.go:168] "Request Body" body=""
	I1212 00:35:21.470490  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:21.470805  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:21.970405  525066 type.go:168] "Request Body" body=""
	I1212 00:35:21.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:21.970793  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:22.470400  525066 type.go:168] "Request Body" body=""
	I1212 00:35:22.470486  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:22.470834  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:22.470893  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
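
Every warning in this stretch reduces to the same TCP-level condition: nothing is listening on 192.168.49.2:8441 while kube-apiserver restarts, so the dial is refused before any HTTP exchange. A bare-dial sketch to reproduce that condition outside minikube, assuming only the Go standard library:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver endpoint directly; "connection refused" means
	// no process is bound to the port yet, which is exactly what the
	// retry loop above is waiting out.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
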
	I1212 00:35:22.970432  525066 type.go:168] "Request Body" body=""
	I1212 00:35:22.970507  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:22.970864  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:23.470610  525066 type.go:168] "Request Body" body=""
	I1212 00:35:23.470694  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:23.471022  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:23.970472  525066 type.go:168] "Request Body" body=""
	I1212 00:35:23.970544  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:23.970934  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:24.470526  525066 type.go:168] "Request Body" body=""
	I1212 00:35:24.470602  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:24.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:24.470937  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:24.970814  525066 type.go:168] "Request Body" body=""
	I1212 00:35:24.970900  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:24.971212  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:25.470981  525066 type.go:168] "Request Body" body=""
	I1212 00:35:25.471083  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:25.471412  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:25.971186  525066 type.go:168] "Request Body" body=""
	I1212 00:35:25.971270  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:25.971542  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:26.471296  525066 type.go:168] "Request Body" body=""
	I1212 00:35:26.471372  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:26.471691  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:26.471748  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:26.970425  525066 type.go:168] "Request Body" body=""
	I1212 00:35:26.970494  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:26.970788  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:27.470473  525066 type.go:168] "Request Body" body=""
	I1212 00:35:27.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:27.470900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:27.970608  525066 type.go:168] "Request Body" body=""
	I1212 00:35:27.970694  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:27.971049  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:28.470775  525066 type.go:168] "Request Body" body=""
	I1212 00:35:28.470856  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:28.471187  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:28.970958  525066 type.go:168] "Request Body" body=""
	I1212 00:35:28.971022  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:28.971277  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:28.971316  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:29.471162  525066 type.go:168] "Request Body" body=""
	I1212 00:35:29.471240  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:29.471593  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:29.970376  525066 type.go:168] "Request Body" body=""
	I1212 00:35:29.970454  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:29.970816  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:30.471109  525066 type.go:168] "Request Body" body=""
	I1212 00:35:30.471183  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:30.471480  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:30.971287  525066 type.go:168] "Request Body" body=""
	I1212 00:35:30.971360  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:30.971672  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:30.971729  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:31.470405  525066 type.go:168] "Request Body" body=""
	I1212 00:35:31.470485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:31.470830  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:31.970549  525066 type.go:168] "Request Body" body=""
	I1212 00:35:31.970619  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:31.970957  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:32.470649  525066 type.go:168] "Request Body" body=""
	I1212 00:35:32.470745  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:32.471093  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:32.970460  525066 type.go:168] "Request Body" body=""
	I1212 00:35:32.970533  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:32.970861  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:33.470443  525066 type.go:168] "Request Body" body=""
	I1212 00:35:33.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:33.470783  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:33.470825  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:33.970454  525066 type.go:168] "Request Body" body=""
	I1212 00:35:33.970535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:33.970883  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:34.470595  525066 type.go:168] "Request Body" body=""
	I1212 00:35:34.470673  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:34.471021  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:34.970778  525066 type.go:168] "Request Body" body=""
	I1212 00:35:34.970845  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:34.971108  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:35.470789  525066 type.go:168] "Request Body" body=""
	I1212 00:35:35.470893  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:35.471408  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:35.471455  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:35.971178  525066 type.go:168] "Request Body" body=""
	I1212 00:35:35.971249  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:35.971545  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:36.471287  525066 type.go:168] "Request Body" body=""
	I1212 00:35:36.471358  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:36.471623  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:36.970386  525066 type.go:168] "Request Body" body=""
	I1212 00:35:36.970468  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:36.970815  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:37.470527  525066 type.go:168] "Request Body" body=""
	I1212 00:35:37.470612  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:37.470950  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:37.970440  525066 type.go:168] "Request Body" body=""
	I1212 00:35:37.970510  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:37.970824  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:37.970880  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:38.470422  525066 type.go:168] "Request Body" body=""
	I1212 00:35:38.470503  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:38.470828  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:38.970459  525066 type.go:168] "Request Body" body=""
	I1212 00:35:38.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:38.970885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:39.470567  525066 type.go:168] "Request Body" body=""
	I1212 00:35:39.470634  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:39.470915  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:39.971016  525066 type.go:168] "Request Body" body=""
	I1212 00:35:39.971090  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:39.971449  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:39.971507  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:40.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:35:40.470535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:40.470907  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:40.970388  525066 type.go:168] "Request Body" body=""
	I1212 00:35:40.970449  525066 node_ready.go:38] duration metric: took 6m0.000230679s for node "functional-035643" to be "Ready" ...
	I1212 00:35:40.973928  525066 out.go:203] 
	W1212 00:35:40.976747  525066 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 00:35:40.976773  525066 out.go:285] * 
	W1212 00:35:40.981440  525066 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:35:40.984739  525066 out.go:203] 
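
	The six-minute run of near-identical requests above is minikube's node-readiness poll: one GET of /api/v1/nodes/functional-035643 roughly every 500 ms, each refused because nothing is listening on 8441, until the 6m0s wait deadline expires and the start aborts. A minimal client-go sketch of the same wait pattern (names are mine; this is an illustration of the standard PollUntilContextTimeout idiom, not minikube's actual node_ready.go):

	// nodeready.go - poll a node's Ready condition the way the log above does:
	// one GET every 500ms, giving up after 6 minutes.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Transient errors (e.g. connection refused) are logged and
					// retried, matching the "will retry" warnings in the log.
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "functional-035643"); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}

	When the deadline passes, PollUntilContextTimeout returns a context.DeadlineExceeded error, which is exactly the "WaitNodeCondition: context deadline exceeded" surfaced in the GUEST_START error above.
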
	
	
	==> CRI-O <==
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.034109304Z" level=info msg="Using the internal default seccomp profile"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.034177635Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.03422949Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.034284676Z" level=info msg="RDT not available in the host system"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.034352408Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.035250323Z" level=info msg="Conmon does support the --sync option"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.035364265Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.035430742Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.036202564Z" level=info msg="Conmon does support the --sync option"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.036293393Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.03648849Z" level=info msg="Updated default CNI network name to "
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.037130698Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.037740481Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.037879998Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.09225968Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.09231892Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092389244Z" level=info msg="Create NRI interface"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092551029Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092568366Z" level=info msg="runtime interface created"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092583759Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092594753Z" level=info msg="runtime interface starting up..."
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092601407Z" level=info msg="starting plugins..."
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092616291Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092695756Z" level=info msg="No systemd watchdog enabled"
	Dec 12 00:29:38 functional-035643 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:35:42.991971    8561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:42.992354    8561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:42.993936    8561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:42.994414    8561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:42.996469    8561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 00:17] overlayfs: idmapped layers are currently not supported
	[Dec12 00:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:35:43 up  3:18,  0 user,  load average: 0.21, 0.30, 0.79
	Linux functional-035643 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:35:40 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:40 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 12 00:35:40 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:40 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:41 functional-035643 kubelet[8447]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:41 functional-035643 kubelet[8447]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:41 functional-035643 kubelet[8447]: E1212 00:35:41.067812    8447 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:41 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:41 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:41 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 12 00:35:41 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:41 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:41 functional-035643 kubelet[8453]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:41 functional-035643 kubelet[8453]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:41 functional-035643 kubelet[8453]: E1212 00:35:41.784444    8453 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:41 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:41 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:42 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 12 00:35:42 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:42 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:42 functional-035643 kubelet[8474]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:42 functional-035643 kubelet[8474]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:42 functional-035643 kubelet[8474]: E1212 00:35:42.527893    8474 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:42 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:42 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
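
The kubelet section of the log dump above contains the actual root cause: kubelet v1.35.0-beta.0 validates its configuration at startup and exits because the host is still on cgroup v1 ("cgroup v1 support is unsupported and will be removed in a future release"), systemd restarts it in a loop (restart counter 1138-1140), and with no kubelet the API server on 8441 never comes up, which is why every request above is refused. The conventional way to tell the two hierarchies apart is to look for cgroup.controllers at the cgroup mount root; a minimal probe in Go (an illustration of the check, not kubelet's validation code):

	// cgroup hierarchy probe: on a unified (v2) hierarchy the file
	// /sys/fs/cgroup/cgroup.controllers exists; on legacy v1 it does not.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified) hierarchy")
		} else {
			fmt.Println("cgroup v1 hierarchy - kubelet >= v1.35 refuses to start here")
		}
	}

On this host the probe would print the v1 branch, matching the validation error in the kubelet log.
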
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 2 (380.253111ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.46s)
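
The status check above (`--format={{.APIServer}}` printing "Stopped") is a Go text/template rendered over minikube's status value, which is why simple field selectors work as format strings. A small sketch of that mechanism (the struct here is hypothetical and trimmed to the fields this report queries; the real minikube status type has more):

	// status-format sketch: render a --format={{.APIServer}}-style template
	// over a status struct, the way a status command's format flag works.
	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical status shape, reduced to fields seen in this report.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		format := "{{.APIServer}}" // the value passed to --format
		tmpl := template.Must(template.New("status").Parse(format))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		os.Stdout.WriteString("\n") // prints: Stopped
	}
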

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-035643 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-035643 get po -A: exit status 1 (64.998198ms)

                                                
                                                
** stderr ** 
	E1212 00:35:44.362010  529093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:35:44.363705  529093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:35:44.365238  529093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:35:44.366844  529093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:35:44.368361  529093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-035643 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1212 00:35:44.362010  529093 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1212 00:35:44.363705  529093 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1212 00:35:44.365238  529093 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1212 00:35:44.366844  529093 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1212 00:35:44.368361  529093 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nThe connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-035643 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-035643 get po -A"
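
Every kubectl failure above carries the same signature, dial tcp 192.168.49.2:8441: connect: connection refused, meaning nothing is listening on the API-server port at all, as opposed to a TLS, auth, or DNS problem. A quick triage probe that makes that distinction explicit (plain net.DialTimeout; just a sketch, not part of the test suite):

	// apiserver reachability triage: "connection refused" at the TCP layer
	// means no listener; success means the failure is higher in the stack.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.49.2:8441" // endpoint from the errors above
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("TCP dial to %s failed: %v\n", addr, err)
			return // refused here = the apiserver process is down
		}
		conn.Close()
		fmt.Printf("TCP dial to %s succeeded; the problem is above TCP\n", addr)
	}
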
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-035643
helpers_test.go:244: (dbg) docker inspect functional-035643:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	        "Created": "2025-12-12T00:21:16.539894649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:21:16.600605162Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hostname",
	        "HostsPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hosts",
	        "LogPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a-json.log",
	        "Name": "/functional-035643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-035643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	                "LowerDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-035643",
	                "Source": "/var/lib/docker/volumes/functional-035643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035643",
	                "name.minikube.sigs.k8s.io": "functional-035643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ede6a17442d6bf83b8f4c9f93f252345cec3d0406f82de2d6bd2cfd4713e2163",
	            "SandboxKey": "/var/run/docker/netns/ede6a17442d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d5:12:89:ea:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad01995b183fdebead6c725e2b942ae8ce2d3964b3552789fe5b50ee7e7239a3",
	                    "EndpointID": "d429a1cd0f840d042af4ad7ea0bda6067a342be7fb552083411004a3604b0124",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035643",
	                        "02b8c8e636a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
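
The inspect output is also where the published API-server endpoint lives: 8441/tcp inside the container is bound to 127.0.0.1:33186 on the host. A trimmed sketch of extracting such a mapping from docker inspect JSON (the struct keeps only NetworkSettings.Ports; the helper name is made up):

	// inspect-ports sketch: find the host binding for a container port in
	// `docker inspect` output, e.g. 8441/tcp -> 127.0.0.1:33186 above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Minimal slice of the inspect schema.
	type inspectInfo struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string
			}
		}
	}

	func hostBinding(container, port string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var infos []inspectInfo
		if err := json.Unmarshal(out, &infos); err != nil {
			return "", err
		}
		if len(infos) > 0 {
			for _, b := range infos[0].NetworkSettings.Ports[port] {
				return b.HostIP + ":" + b.HostPort, nil
			}
		}
		return "", fmt.Errorf("port %s not published on %s", port, container)
	}

	func main() {
		addr, err := hostBinding("functional-035643", "8441/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println(addr) // e.g. 127.0.0.1:33186
	}
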
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643: exit status 2 (303.881084ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-035643 logs -n 25: (1.175661064s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-921447 image rm kicbase/echo-server:functional-921447 --alsologtostderr                                                                │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls                                                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/test/nested/copy/490954/hosts                                                                                 │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                               │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/ssl/certs/490954.pem                                                                                          │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls                                                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /usr/share/ca-certificates/490954.pem                                                                              │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image save --daemon kicbase/echo-server:functional-921447 --alsologtostderr                                                     │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/ssl/certs/4909542.pem                                                                                         │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /usr/share/ca-certificates/4909542.pem                                                                             │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ update-context │ functional-921447 update-context --alsologtostderr -v=2                                                                                           │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ update-context │ functional-921447 update-context --alsologtostderr -v=2                                                                                           │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ update-context │ functional-921447 update-context --alsologtostderr -v=2                                                                                           │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls --format yaml --alsologtostderr                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls --format short --alsologtostderr                                                                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls --format json --alsologtostderr                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh            │ functional-921447 ssh pgrep buildkitd                                                                                                             │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ image          │ functional-921447 image ls --format table --alsologtostderr                                                                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image build -t localhost/my-image:functional-921447 testdata/build --alsologtostderr                                            │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image          │ functional-921447 image ls                                                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ delete         │ -p functional-921447                                                                                                                              │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ start          │ -p functional-035643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ start          │ -p functional-035643 --alsologtostderr -v=8                                                                                                       │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:29 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:29:34
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:29:34.833608  525066 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:29:34.833799  525066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:34.833830  525066 out.go:374] Setting ErrFile to fd 2...
	I1212 00:29:34.833859  525066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:34.834244  525066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:29:34.834787  525066 out.go:368] Setting JSON to false
	I1212 00:29:34.835727  525066 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11520,"bootTime":1765487855,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:29:34.836335  525066 start.go:143] virtualization:  
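
	The hostinfo line above is a struct with the field set of gopsutil's host.InfoStat (hostname, uptime, bootTime, kernelArch, hostId, ...) marshalled into the log; fetching the same facts directly is a one-liner with that library (a sketch assuming the github.com/shirou/gopsutil/v3 module):

	// hostinfo sketch: print the same host facts the start log records.
	package main

	import (
		"encoding/json"
		"fmt"

		"github.com/shirou/gopsutil/v3/host"
	)

	func main() {
		info, err := host.Info() // hostname, uptime, boot time, kernel, virtualization...
		if err != nil {
			panic(err)
		}
		b, _ := json.Marshal(info)
		fmt.Println(string(b))
	}
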
	I1212 00:29:34.841302  525066 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:29:34.846669  525066 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:29:34.846785  525066 notify.go:221] Checking for updates...
	I1212 00:29:34.852399  525066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:29:34.855222  525066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:34.857924  525066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:29:34.860585  525066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:29:34.863145  525066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:29:34.866639  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:34.866818  525066 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:29:34.892569  525066 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:29:34.892680  525066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:34.954074  525066 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:29:34.944774098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:29:34.954186  525066 docker.go:319] overlay module found
	I1212 00:29:34.958427  525066 out.go:179] * Using the docker driver based on existing profile
	I1212 00:29:34.960983  525066 start.go:309] selected driver: docker
	I1212 00:29:34.961005  525066 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:34.961104  525066 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:29:34.961212  525066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:35.019269  525066 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:29:35.008770771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:29:35.019716  525066 cni.go:84] Creating CNI manager for ""
	I1212 00:29:35.019778  525066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:35.019842  525066 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:35.022879  525066 out.go:179] * Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	I1212 00:29:35.025659  525066 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:29:35.028463  525066 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:29:35.031434  525066 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:29:35.031495  525066 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:29:35.031510  525066 cache.go:65] Caching tarball of preloaded images
	I1212 00:29:35.031544  525066 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:29:35.031603  525066 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:29:35.031614  525066 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:29:35.031729  525066 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/config.json ...
	I1212 00:29:35.051219  525066 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:29:35.051245  525066 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:29:35.051267  525066 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:29:35.051303  525066 start.go:360] acquireMachinesLock for functional-035643: {Name:mkb0cdc7d354412594dc63c0234fde00134e758d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:29:35.051387  525066 start.go:364] duration metric: took 54.908µs to acquireMachinesLock for "functional-035643"
	I1212 00:29:35.051416  525066 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:29:35.051428  525066 fix.go:54] fixHost starting: 
	I1212 00:29:35.051696  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:35.069320  525066 fix.go:112] recreateIfNeeded on functional-035643: state=Running err=<nil>
	W1212 00:29:35.069352  525066 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:29:35.072554  525066 out.go:252] * Updating the running docker "functional-035643" container ...
	I1212 00:29:35.072600  525066 machine.go:94] provisionDockerMachine start ...
	I1212 00:29:35.072693  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.090330  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.090669  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.090706  525066 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:29:35.238363  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:29:35.238387  525066 ubuntu.go:182] provisioning hostname "functional-035643"
	I1212 00:29:35.238453  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.256201  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.256511  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.256528  525066 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-035643 && echo "functional-035643" | sudo tee /etc/hostname
	I1212 00:29:35.418094  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:29:35.418176  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.436164  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.436475  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.436494  525066 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-035643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-035643/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-035643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:29:35.594938  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
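Whichever branch of the guard above fired, the guest should now hold a hosts entry of the form below (a sketch of the expected /etc/hosts line, not captured from this run):

	127.0.1.1 functional-035643

An empty SSH result is consistent with that: the grep guard skips the whole block when a matching entry already exists, and sed -i rewrites in place without output; only the tee -a fallback would have echoed the line back.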
	I1212 00:29:35.594969  525066 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 00:29:35.595009  525066 ubuntu.go:190] setting up certificates
	I1212 00:29:35.595026  525066 provision.go:84] configureAuth start
	I1212 00:29:35.595111  525066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:29:35.612398  525066 provision.go:143] copyHostCerts
	I1212 00:29:35.612439  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:29:35.612482  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 00:29:35.612494  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:29:35.612571  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 00:29:35.612671  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:29:35.612699  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 00:29:35.612707  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:29:35.612734  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 00:29:35.612781  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:29:35.612802  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 00:29:35.612813  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:29:35.612837  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 00:29:35.612889  525066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.functional-035643 san=[127.0.0.1 192.168.49.2 functional-035643 localhost minikube]
	I1212 00:29:35.977748  525066 provision.go:177] copyRemoteCerts
	I1212 00:29:35.977818  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:29:35.977857  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.995348  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.106772  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:29:36.106859  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:29:36.126035  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:29:36.126112  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:29:36.143996  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:29:36.144114  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:29:36.161387  525066 provision.go:87] duration metric: took 566.343959ms to configureAuth
	I1212 00:29:36.161415  525066 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:29:36.161612  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:36.161722  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.179565  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:36.179872  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:36.179896  525066 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:29:36.525259  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
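The file echoed above hands CRI-O one extra flag, --insecure-registry 10.96.0.0/12, matching the cluster's ServiceCIDR so that in-cluster registry services can be pulled from without TLS. One hedged way to confirm the flag took effect after the restart, assuming the systemd unit expands CRIO_MINIKUBE_OPTIONS into its ExecStart, is to inspect the running command line:

	sudo ps -o args= -C crio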
	I1212 00:29:36.525285  525066 machine.go:97] duration metric: took 1.45267532s to provisionDockerMachine
	I1212 00:29:36.525297  525066 start.go:293] postStartSetup for "functional-035643" (driver="docker")
	I1212 00:29:36.525310  525066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:29:36.525385  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:29:36.525432  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.544323  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.650745  525066 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:29:36.654027  525066 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 00:29:36.654058  525066 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 00:29:36.654063  525066 command_runner.go:130] > VERSION_ID="12"
	I1212 00:29:36.654067  525066 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 00:29:36.654072  525066 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 00:29:36.654076  525066 command_runner.go:130] > ID=debian
	I1212 00:29:36.654081  525066 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 00:29:36.654086  525066 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 00:29:36.654098  525066 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 00:29:36.654164  525066 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:29:36.654184  525066 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:29:36.654203  525066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 00:29:36.654261  525066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 00:29:36.654368  525066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 00:29:36.654379  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /etc/ssl/certs/4909542.pem
	I1212 00:29:36.654462  525066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> hosts in /etc/test/nested/copy/490954
	I1212 00:29:36.654470  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> /etc/test/nested/copy/490954/hosts
	I1212 00:29:36.654523  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/490954
	I1212 00:29:36.661942  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:29:36.678936  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts --> /etc/test/nested/copy/490954/hosts (40 bytes)
	I1212 00:29:36.696209  525066 start.go:296] duration metric: took 170.896684ms for postStartSetup
	I1212 00:29:36.696330  525066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:29:36.696401  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.716202  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.819154  525066 command_runner.go:130] > 18%
	I1212 00:29:36.819742  525066 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:29:36.823869  525066 command_runner.go:130] > 160G
	I1212 00:29:36.824320  525066 fix.go:56] duration metric: took 1.772888094s for fixHost
	I1212 00:29:36.824342  525066 start.go:83] releasing machines lock for "functional-035643", held for 1.772938226s
	I1212 00:29:36.824419  525066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:29:36.841414  525066 ssh_runner.go:195] Run: cat /version.json
	I1212 00:29:36.841444  525066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:29:36.841465  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.841499  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.858975  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.864277  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:37.063000  525066 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:29:37.063067  525066 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1212 00:29:37.063223  525066 ssh_runner.go:195] Run: systemctl --version
	I1212 00:29:37.069375  525066 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 00:29:37.069421  525066 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 00:29:37.069789  525066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:29:37.107153  525066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:29:37.111099  525066 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:29:37.111476  525066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:29:37.111538  525066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:29:37.119321  525066 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:29:37.119346  525066 start.go:496] detecting cgroup driver to use...
	I1212 00:29:37.119377  525066 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:29:37.119429  525066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:29:37.134288  525066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:29:37.147114  525066 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:29:37.147210  525066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:29:37.162260  525066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:29:37.175226  525066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:29:37.287755  525066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:29:37.404746  525066 docker.go:234] disabling docker service ...
	I1212 00:29:37.404828  525066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:29:37.419834  525066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:29:37.433027  525066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:29:37.553874  525066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:29:37.677379  525066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:29:37.696856  525066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:29:37.711415  525066 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 00:29:37.712568  525066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:29:37.712642  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.724126  525066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:29:37.724197  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.733568  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.743368  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.752442  525066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:29:37.761570  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.771444  525066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.780014  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
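Net effect of the sed pipeline above: the drop-in at /etc/crio/crio.conf.d/02-crio.conf should end up carrying roughly the following settings (a minimal sketch reassembled from the logged commands; the real file keeps its other CRI-O defaults):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]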
	I1212 00:29:37.788901  525066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:29:37.795786  525066 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:29:37.796743  525066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:29:37.804315  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:37.916494  525066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:29:38.098236  525066 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:29:38.098362  525066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:29:38.102398  525066 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 00:29:38.102430  525066 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:29:38.102438  525066 command_runner.go:130] > Device: 0,72	Inode: 1642        Links: 1
	I1212 00:29:38.102445  525066 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:29:38.102451  525066 command_runner.go:130] > Access: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102458  525066 command_runner.go:130] > Modify: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102463  525066 command_runner.go:130] > Change: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102467  525066 command_runner.go:130] >  Birth: -
	I1212 00:29:38.102500  525066 start.go:564] Will wait 60s for crictl version
	I1212 00:29:38.102554  525066 ssh_runner.go:195] Run: which crictl
	I1212 00:29:38.105961  525066 command_runner.go:130] > /usr/local/bin/crictl
	I1212 00:29:38.106209  525066 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:29:38.130147  525066 command_runner.go:130] > Version:  0.1.0
	I1212 00:29:38.130215  525066 command_runner.go:130] > RuntimeName:  cri-o
	I1212 00:29:38.130236  525066 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1212 00:29:38.130255  525066 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:29:38.130299  525066 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:29:38.130400  525066 ssh_runner.go:195] Run: crio --version
	I1212 00:29:38.156955  525066 command_runner.go:130] > crio version 1.34.3
	I1212 00:29:38.157026  525066 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 00:29:38.157055  525066 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 00:29:38.157075  525066 command_runner.go:130] >    GitTreeState:   dirty
	I1212 00:29:38.157101  525066 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 00:29:38.157125  525066 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 00:29:38.157142  525066 command_runner.go:130] >    Compiler:       gc
	I1212 00:29:38.157162  525066 command_runner.go:130] >    Platform:       linux/arm64
	I1212 00:29:38.157188  525066 command_runner.go:130] >    Linkmode:       static
	I1212 00:29:38.157205  525066 command_runner.go:130] >    BuildTags:
	I1212 00:29:38.157231  525066 command_runner.go:130] >      static
	I1212 00:29:38.157260  525066 command_runner.go:130] >      netgo
	I1212 00:29:38.157278  525066 command_runner.go:130] >      osusergo
	I1212 00:29:38.157296  525066 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 00:29:38.157309  525066 command_runner.go:130] >      seccomp
	I1212 00:29:38.157334  525066 command_runner.go:130] >      apparmor
	I1212 00:29:38.157350  525066 command_runner.go:130] >      selinux
	I1212 00:29:38.157366  525066 command_runner.go:130] >    LDFlags:          unknown
	I1212 00:29:38.157384  525066 command_runner.go:130] >    SeccompEnabled:   true
	I1212 00:29:38.157415  525066 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 00:29:38.159818  525066 ssh_runner.go:195] Run: crio --version
	I1212 00:29:38.187365  525066 command_runner.go:130] > crio version 1.34.3
	I1212 00:29:38.187391  525066 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 00:29:38.187398  525066 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 00:29:38.187403  525066 command_runner.go:130] >    GitTreeState:   dirty
	I1212 00:29:38.187408  525066 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 00:29:38.187414  525066 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 00:29:38.187418  525066 command_runner.go:130] >    Compiler:       gc
	I1212 00:29:38.187438  525066 command_runner.go:130] >    Platform:       linux/arm64
	I1212 00:29:38.187447  525066 command_runner.go:130] >    Linkmode:       static
	I1212 00:29:38.187451  525066 command_runner.go:130] >    BuildTags:
	I1212 00:29:38.187455  525066 command_runner.go:130] >      static
	I1212 00:29:38.187459  525066 command_runner.go:130] >      netgo
	I1212 00:29:38.187463  525066 command_runner.go:130] >      osusergo
	I1212 00:29:38.187468  525066 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 00:29:38.187481  525066 command_runner.go:130] >      seccomp
	I1212 00:29:38.187489  525066 command_runner.go:130] >      apparmor
	I1212 00:29:38.187494  525066 command_runner.go:130] >      selinux
	I1212 00:29:38.187502  525066 command_runner.go:130] >    LDFlags:          unknown
	I1212 00:29:38.187507  525066 command_runner.go:130] >    SeccompEnabled:   true
	I1212 00:29:38.187511  525066 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 00:29:38.193058  525066 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:29:38.195137  525066 cli_runner.go:164] Run: docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:29:38.211553  525066 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:29:38.215227  525066 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 00:29:38.215507  525066 kubeadm.go:884] updating cluster {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:29:38.215633  525066 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:29:38.215688  525066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:38.248801  525066 command_runner.go:130] > {
	I1212 00:29:38.248822  525066 command_runner.go:130] >   "images":  [
	I1212 00:29:38.248827  525066 command_runner.go:130] >     {
	I1212 00:29:38.248837  525066 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:29:38.248842  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.248851  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:29:38.248855  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248859  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.248869  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 00:29:38.248877  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 00:29:38.248880  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248885  525066 command_runner.go:130] >       "size":  "111333938",
	I1212 00:29:38.248893  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.248898  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.248901  525066 command_runner.go:130] >     },
	I1212 00:29:38.248905  525066 command_runner.go:130] >     {
	I1212 00:29:38.248911  525066 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:29:38.248926  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.248931  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:29:38.248935  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248939  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.248951  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 00:29:38.248960  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:29:38.248967  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248971  525066 command_runner.go:130] >       "size":  "29037500",
	I1212 00:29:38.248975  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.248983  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.248987  525066 command_runner.go:130] >     },
	I1212 00:29:38.248990  525066 command_runner.go:130] >     {
	I1212 00:29:38.248998  525066 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:29:38.249004  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249018  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:29:38.249026  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249036  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249044  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 00:29:38.249058  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 00:29:38.249061  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249065  525066 command_runner.go:130] >       "size":  "74491780",
	I1212 00:29:38.249070  525066 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:29:38.249073  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249080  525066 command_runner.go:130] >     },
	I1212 00:29:38.249083  525066 command_runner.go:130] >     {
	I1212 00:29:38.249093  525066 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:29:38.249104  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249109  525066 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:29:38.249112  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249116  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249125  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 00:29:38.249135  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 00:29:38.249139  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249142  525066 command_runner.go:130] >       "size":  "60857170",
	I1212 00:29:38.249146  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249150  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249153  525066 command_runner.go:130] >       },
	I1212 00:29:38.249166  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249173  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249177  525066 command_runner.go:130] >     },
	I1212 00:29:38.249179  525066 command_runner.go:130] >     {
	I1212 00:29:38.249186  525066 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:29:38.249192  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249197  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:29:38.249201  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249205  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249215  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 00:29:38.249230  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 00:29:38.249234  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249241  525066 command_runner.go:130] >       "size":  "84949999",
	I1212 00:29:38.249245  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249249  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249254  525066 command_runner.go:130] >       },
	I1212 00:29:38.249259  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249263  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249268  525066 command_runner.go:130] >     },
	I1212 00:29:38.249272  525066 command_runner.go:130] >     {
	I1212 00:29:38.249281  525066 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:29:38.249294  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249301  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:29:38.249304  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249308  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249317  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 00:29:38.249326  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 00:29:38.249337  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249341  525066 command_runner.go:130] >       "size":  "72170325",
	I1212 00:29:38.249345  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249348  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249356  525066 command_runner.go:130] >       },
	I1212 00:29:38.249364  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249367  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249371  525066 command_runner.go:130] >     },
	I1212 00:29:38.249374  525066 command_runner.go:130] >     {
	I1212 00:29:38.249381  525066 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:29:38.249386  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249391  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:29:38.249394  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249398  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249409  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 00:29:38.249426  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:29:38.249434  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249438  525066 command_runner.go:130] >       "size":  "74106775",
	I1212 00:29:38.249450  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249454  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249458  525066 command_runner.go:130] >     },
	I1212 00:29:38.249461  525066 command_runner.go:130] >     {
	I1212 00:29:38.249468  525066 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:29:38.249472  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249481  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:29:38.249484  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249488  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249502  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 00:29:38.249522  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 00:29:38.249528  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249532  525066 command_runner.go:130] >       "size":  "49822549",
	I1212 00:29:38.249535  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249539  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249549  525066 command_runner.go:130] >       },
	I1212 00:29:38.249553  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249556  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249559  525066 command_runner.go:130] >     },
	I1212 00:29:38.249562  525066 command_runner.go:130] >     {
	I1212 00:29:38.249568  525066 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:29:38.249572  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249576  525066 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.249581  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249586  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249598  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 00:29:38.249606  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 00:29:38.249613  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249617  525066 command_runner.go:130] >       "size":  "519884",
	I1212 00:29:38.249621  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249626  525066 command_runner.go:130] >         "value":  "65535"
	I1212 00:29:38.249633  525066 command_runner.go:130] >       },
	I1212 00:29:38.249642  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249646  525066 command_runner.go:130] >       "pinned":  true
	I1212 00:29:38.249649  525066 command_runner.go:130] >     }
	I1212 00:29:38.249653  525066 command_runner.go:130] >   ]
	I1212 00:29:38.249656  525066 command_runner.go:130] > }
	I1212 00:29:38.252138  525066 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:38.252165  525066 crio.go:433] Images already preloaded, skipping extraction
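To replay this inventory check by hand, the repo tags can be flattened out of the JSON above (assuming jq is available wherever the output is piped):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'

which should print the nine preloaded images, from docker.io/kindest/kindnetd:v20250512-df8de77b through registry.k8s.io/pause:3.10.1.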
	I1212 00:29:38.252226  525066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:38.276626  525066 command_runner.go:130] > {
	I1212 00:29:38.276647  525066 command_runner.go:130] >   "images":  [
	I1212 00:29:38.276651  525066 command_runner.go:130] >     {
	I1212 00:29:38.276660  525066 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:29:38.276674  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276681  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:29:38.276684  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276690  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276700  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 00:29:38.276711  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 00:29:38.276717  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276721  525066 command_runner.go:130] >       "size":  "111333938",
	I1212 00:29:38.276725  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.276731  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276737  525066 command_runner.go:130] >     },
	I1212 00:29:38.276740  525066 command_runner.go:130] >     {
	I1212 00:29:38.276747  525066 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:29:38.276754  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276760  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:29:38.276767  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276771  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276781  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 00:29:38.276790  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:29:38.276794  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276799  525066 command_runner.go:130] >       "size":  "29037500",
	I1212 00:29:38.276807  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.276815  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276822  525066 command_runner.go:130] >     },
	I1212 00:29:38.276826  525066 command_runner.go:130] >     {
	I1212 00:29:38.276833  525066 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:29:38.276839  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276845  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:29:38.276850  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276854  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276868  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 00:29:38.276876  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 00:29:38.276879  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276883  525066 command_runner.go:130] >       "size":  "74491780",
	I1212 00:29:38.276891  525066 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:29:38.276895  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276901  525066 command_runner.go:130] >     },
	I1212 00:29:38.276904  525066 command_runner.go:130] >     {
	I1212 00:29:38.276911  525066 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:29:38.276918  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276922  525066 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:29:38.276925  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276930  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276940  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 00:29:38.276951  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 00:29:38.276954  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276973  525066 command_runner.go:130] >       "size":  "60857170",
	I1212 00:29:38.276977  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.276980  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.276983  525066 command_runner.go:130] >       },
	I1212 00:29:38.276994  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277001  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277004  525066 command_runner.go:130] >     },
	I1212 00:29:38.277007  525066 command_runner.go:130] >     {
	I1212 00:29:38.277014  525066 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:29:38.277019  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277032  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:29:38.277039  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277043  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277051  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 00:29:38.277066  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 00:29:38.277070  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277074  525066 command_runner.go:130] >       "size":  "84949999",
	I1212 00:29:38.277078  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277086  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277089  525066 command_runner.go:130] >       },
	I1212 00:29:38.277093  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277101  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277104  525066 command_runner.go:130] >     },
	I1212 00:29:38.277110  525066 command_runner.go:130] >     {
	I1212 00:29:38.277117  525066 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:29:38.277123  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277129  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:29:38.277132  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277136  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277145  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 00:29:38.277157  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 00:29:38.277160  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277164  525066 command_runner.go:130] >       "size":  "72170325",
	I1212 00:29:38.277167  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277171  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277175  525066 command_runner.go:130] >       },
	I1212 00:29:38.277181  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277186  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277191  525066 command_runner.go:130] >     },
	I1212 00:29:38.277194  525066 command_runner.go:130] >     {
	I1212 00:29:38.277203  525066 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:29:38.277209  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277215  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:29:38.277225  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277229  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277238  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 00:29:38.277251  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:29:38.277255  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277259  525066 command_runner.go:130] >       "size":  "74106775",
	I1212 00:29:38.277263  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277269  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277273  525066 command_runner.go:130] >     },
	I1212 00:29:38.277276  525066 command_runner.go:130] >     {
	I1212 00:29:38.277283  525066 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:29:38.277289  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277294  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:29:38.277297  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277301  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277309  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 00:29:38.277326  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 00:29:38.277330  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277334  525066 command_runner.go:130] >       "size":  "49822549",
	I1212 00:29:38.277340  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277344  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277347  525066 command_runner.go:130] >       },
	I1212 00:29:38.277351  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277357  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277360  525066 command_runner.go:130] >     },
	I1212 00:29:38.277364  525066 command_runner.go:130] >     {
	I1212 00:29:38.277373  525066 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:29:38.277377  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277390  525066 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.277394  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277397  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277405  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 00:29:38.277416  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 00:29:38.277424  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277429  525066 command_runner.go:130] >       "size":  "519884",
	I1212 00:29:38.277432  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277438  525066 command_runner.go:130] >         "value":  "65535"
	I1212 00:29:38.277442  525066 command_runner.go:130] >       },
	I1212 00:29:38.277447  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277453  525066 command_runner.go:130] >       "pinned":  true
	I1212 00:29:38.277456  525066 command_runner.go:130] >     }
	I1212 00:29:38.277459  525066 command_runner.go:130] >   ]
	I1212 00:29:38.277464  525066 command_runner.go:130] > }
	I1212 00:29:38.282583  525066 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:38.282606  525066 cache_images.go:86] Images are preloaded, skipping loading
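
	For reference, the image list minikube just parsed is the JSON dump above. A minimal Go sketch of decoding that shape, assuming only the fields visible in the dump (the struct names here are illustrative, not minikube's own types):

	    package main

	    import (
	        "encoding/json"
	        "fmt"
	    )

	    // Mirrors only the fields visible in the `crictl images --output json` dump above.
	    type crictlImage struct {
	        ID          string   `json:"id"`
	        RepoTags    []string `json:"repoTags"`
	        RepoDigests []string `json:"repoDigests"`
	        Size        string   `json:"size"`     // bytes, serialized as a string
	        Username    string   `json:"username"` // e.g. "nonroot" for the coredns image
	        Pinned      bool     `json:"pinned"`   // true only for the pause image in this dump
	    }

	    type crictlImageList struct {
	        Images []crictlImage `json:"images"`
	    }

	    func main() {
	        raw := []byte(`{"images": [{"id": "abc", "repoTags": ["registry.k8s.io/pause:3.10.1"], "size": "519884", "pinned": true}]}`)
	        var list crictlImageList
	        if err := json.Unmarshal(raw, &list); err != nil {
	            panic(err)
	        }
	        for _, img := range list.Images {
	            fmt.Println(img.RepoTags, img.Size, img.Pinned)
	        }
	    }

	Note that crictl serializes "size" as a string rather than a number (see "111333938" etc. above), which is why the field is typed string in the sketch.
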
	I1212 00:29:38.282613  525066 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 00:29:38.282744  525066 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-035643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
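
	Aside on the doubled ExecStart= in the unit above: in a systemd drop-in, the first, empty ExecStart= clears the command line inherited from the base unit, and the second installs the kubelet invocation with the node-specific flags. A minimal Go sketch of rendering such a drop-in with text/template; the struct and template here are hypothetical stand-ins, not minikube's actual template:

	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    // Hypothetical field names; the real values come from the cluster config above.
	    type kubeletUnit struct {
	        Binary   string // e.g. /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	        Hostname string // e.g. functional-035643
	        NodeIP   string // e.g. 192.168.49.2
	    }

	    const unitTmpl = `[Unit]
	    Wants=crio.service

	    [Service]
	    ExecStart=
	    ExecStart={{.Binary}} --hostname-override={{.Hostname}} --node-ip={{.NodeIP}}

	    [Install]
	    `

	    func main() {
	        t := template.Must(template.New("kubelet-drop-in").Parse(unitTmpl))
	        u := kubeletUnit{
	            Binary:   "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
	            Hostname: "functional-035643",
	            NodeIP:   "192.168.49.2",
	        }
	        if err := t.Execute(os.Stdout, u); err != nil {
	            panic(err)
	        }
	    }
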
	I1212 00:29:38.282831  525066 ssh_runner.go:195] Run: crio config
	I1212 00:29:38.339065  525066 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 00:29:38.339140  525066 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 00:29:38.339162  525066 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 00:29:38.339180  525066 command_runner.go:130] > #
	I1212 00:29:38.339218  525066 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 00:29:38.339243  525066 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 00:29:38.339261  525066 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 00:29:38.339304  525066 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 00:29:38.339327  525066 command_runner.go:130] > # reload'.
	I1212 00:29:38.339346  525066 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 00:29:38.339379  525066 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 00:29:38.339402  525066 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 00:29:38.339422  525066 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 00:29:38.339436  525066 command_runner.go:130] > [crio]
	I1212 00:29:38.339466  525066 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 00:29:38.339488  525066 command_runner.go:130] > # containers images, in this directory.
	I1212 00:29:38.339510  525066 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 00:29:38.339541  525066 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 00:29:38.339562  525066 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1212 00:29:38.339583  525066 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1212 00:29:38.339600  525066 command_runner.go:130] > # imagestore = ""
	I1212 00:29:38.339629  525066 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 00:29:38.339652  525066 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 00:29:38.339676  525066 command_runner.go:130] > # storage_driver = "overlay"
	I1212 00:29:38.339707  525066 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 00:29:38.339730  525066 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 00:29:38.339746  525066 command_runner.go:130] > # storage_option = [
	I1212 00:29:38.339762  525066 command_runner.go:130] > # ]
	I1212 00:29:38.339794  525066 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 00:29:38.339818  525066 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 00:29:38.339834  525066 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 00:29:38.339852  525066 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 00:29:38.339890  525066 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 00:29:38.339907  525066 command_runner.go:130] > # always happen on a node reboot
	I1212 00:29:38.339923  525066 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 00:29:38.339959  525066 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 00:29:38.339984  525066 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 00:29:38.340001  525066 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 00:29:38.340029  525066 command_runner.go:130] > # version_file_persist = ""
	I1212 00:29:38.340052  525066 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 00:29:38.340072  525066 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 00:29:38.340087  525066 command_runner.go:130] > # internal_wipe = true
	I1212 00:29:38.340117  525066 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1212 00:29:38.340140  525066 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1212 00:29:38.340157  525066 command_runner.go:130] > # internal_repair = true
	I1212 00:29:38.340175  525066 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 00:29:38.340208  525066 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 00:29:38.340228  525066 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 00:29:38.340246  525066 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 00:29:38.340277  525066 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 00:29:38.340300  525066 command_runner.go:130] > [crio.api]
	I1212 00:29:38.340319  525066 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 00:29:38.340336  525066 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 00:29:38.340365  525066 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 00:29:38.340387  525066 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 00:29:38.340407  525066 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 00:29:38.340447  525066 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 00:29:38.340822  525066 command_runner.go:130] > # stream_port = "0"
	I1212 00:29:38.340835  525066 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 00:29:38.341007  525066 command_runner.go:130] > # stream_enable_tls = false
	I1212 00:29:38.341018  525066 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 00:29:38.341210  525066 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 00:29:38.341221  525066 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 00:29:38.341229  525066 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1212 00:29:38.341233  525066 command_runner.go:130] > # stream_tls_cert = ""
	I1212 00:29:38.341239  525066 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 00:29:38.341245  525066 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1212 00:29:38.341249  525066 command_runner.go:130] > # stream_tls_key = ""
	I1212 00:29:38.341255  525066 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 00:29:38.341261  525066 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 00:29:38.341272  525066 command_runner.go:130] > # automatically pick up the changes.
	I1212 00:29:38.341446  525066 command_runner.go:130] > # stream_tls_ca = ""
	I1212 00:29:38.341475  525066 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:29:38.341751  525066 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 00:29:38.341765  525066 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:29:38.341770  525066 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
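
	Sanity check on the two defaults just printed: 80 * 1024 * 1024 bytes is exactly the 83886080 shown for both grpc_max_send_msg_size and grpc_max_recv_msg_size. A one-line verification in Go:

	    package main

	    import "fmt"

	    func main() {
	        // The gRPC message-size default quoted above: 80 MiB in bytes.
	        fmt.Println(80 * 1024 * 1024) // prints 83886080
	    }
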
	I1212 00:29:38.341777  525066 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 00:29:38.341782  525066 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 00:29:38.341786  525066 command_runner.go:130] > [crio.runtime]
	I1212 00:29:38.341792  525066 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 00:29:38.341798  525066 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 00:29:38.341801  525066 command_runner.go:130] > # "nofile=1024:2048"
	I1212 00:29:38.341807  525066 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 00:29:38.341811  525066 command_runner.go:130] > # default_ulimits = [
	I1212 00:29:38.341814  525066 command_runner.go:130] > # ]
	I1212 00:29:38.341821  525066 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 00:29:38.341824  525066 command_runner.go:130] > # no_pivot = false
	I1212 00:29:38.341830  525066 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 00:29:38.341836  525066 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 00:29:38.341841  525066 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 00:29:38.341847  525066 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 00:29:38.341851  525066 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 00:29:38.341858  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:29:38.342059  525066 command_runner.go:130] > # conmon = ""
	I1212 00:29:38.342069  525066 command_runner.go:130] > # Cgroup setting for conmon
	I1212 00:29:38.342077  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 00:29:38.342081  525066 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 00:29:38.342087  525066 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 00:29:38.342093  525066 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 00:29:38.342100  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:29:38.342293  525066 command_runner.go:130] > # conmon_env = [
	I1212 00:29:38.342301  525066 command_runner.go:130] > # ]
	I1212 00:29:38.342307  525066 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 00:29:38.342312  525066 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 00:29:38.342318  525066 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 00:29:38.342321  525066 command_runner.go:130] > # default_env = [
	I1212 00:29:38.342325  525066 command_runner.go:130] > # ]
	I1212 00:29:38.342330  525066 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 00:29:38.342338  525066 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1212 00:29:38.342531  525066 command_runner.go:130] > # selinux = false
	I1212 00:29:38.342542  525066 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 00:29:38.342551  525066 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1212 00:29:38.342556  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342765  525066 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:29:38.342777  525066 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1212 00:29:38.342783  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342787  525066 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1212 00:29:38.342804  525066 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 00:29:38.342810  525066 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 00:29:38.342817  525066 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 00:29:38.342823  525066 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 00:29:38.342828  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342833  525066 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 00:29:38.342838  525066 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 00:29:38.342842  525066 command_runner.go:130] > # the cgroup blockio controller.
	I1212 00:29:38.343029  525066 command_runner.go:130] > # blockio_config_file = ""
	I1212 00:29:38.343040  525066 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1212 00:29:38.343044  525066 command_runner.go:130] > # blockio parameters.
	I1212 00:29:38.343244  525066 command_runner.go:130] > # blockio_reload = false
	I1212 00:29:38.343255  525066 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 00:29:38.343260  525066 command_runner.go:130] > # irqbalance daemon.
	I1212 00:29:38.343265  525066 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 00:29:38.343271  525066 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1212 00:29:38.343278  525066 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1212 00:29:38.343285  525066 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1212 00:29:38.343472  525066 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1212 00:29:38.343488  525066 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 00:29:38.343494  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.343668  525066 command_runner.go:130] > # rdt_config_file = ""
	I1212 00:29:38.343679  525066 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 00:29:38.343683  525066 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 00:29:38.343690  525066 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 00:29:38.343893  525066 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 00:29:38.343905  525066 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 00:29:38.343912  525066 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 00:29:38.343920  525066 command_runner.go:130] > # will be added.
	I1212 00:29:38.343925  525066 command_runner.go:130] > # default_capabilities = [
	I1212 00:29:38.344172  525066 command_runner.go:130] > # 	"CHOWN",
	I1212 00:29:38.344180  525066 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 00:29:38.344184  525066 command_runner.go:130] > # 	"FSETID",
	I1212 00:29:38.344187  525066 command_runner.go:130] > # 	"FOWNER",
	I1212 00:29:38.344191  525066 command_runner.go:130] > # 	"SETGID",
	I1212 00:29:38.344194  525066 command_runner.go:130] > # 	"SETUID",
	I1212 00:29:38.344217  525066 command_runner.go:130] > # 	"SETPCAP",
	I1212 00:29:38.344397  525066 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 00:29:38.344405  525066 command_runner.go:130] > # 	"KILL",
	I1212 00:29:38.344408  525066 command_runner.go:130] > # ]
	I1212 00:29:38.344417  525066 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 00:29:38.344424  525066 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 00:29:38.344614  525066 command_runner.go:130] > # add_inheritable_capabilities = false
	I1212 00:29:38.344634  525066 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 00:29:38.344641  525066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:29:38.344645  525066 command_runner.go:130] > default_sysctls = [
	I1212 00:29:38.344818  525066 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1212 00:29:38.344834  525066 command_runner.go:130] > ]
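
	The single default sysctl above lowers the unprivileged-port floor to 0, so a container process can bind ports below 1024 without CAP_NET_BIND_SERVICE. A minimal check in Go, assuming it runs inside a container where this sysctl applies:

	    package main

	    import (
	        "fmt"
	        "net"
	    )

	    func main() {
	        // With net.ipv4.ip_unprivileged_port_start=0, binding port 80 succeeds
	        // for a non-root process; without it, this would fail with EACCES.
	        ln, err := net.Listen("tcp", ":80")
	        if err != nil {
	            fmt.Println("bind failed:", err)
	            return
	        }
	        defer ln.Close()
	        fmt.Println("bound", ln.Addr())
	    }
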
	I1212 00:29:38.344839  525066 command_runner.go:130] > # List of devices on the host that a
	I1212 00:29:38.344846  525066 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 00:29:38.344850  525066 command_runner.go:130] > # allowed_devices = [
	I1212 00:29:38.345064  525066 command_runner.go:130] > # 	"/dev/fuse",
	I1212 00:29:38.345072  525066 command_runner.go:130] > # 	"/dev/net/tun",
	I1212 00:29:38.345076  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345089  525066 command_runner.go:130] > # List of additional devices, specified as
	I1212 00:29:38.345098  525066 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 00:29:38.345141  525066 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 00:29:38.345151  525066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:29:38.345155  525066 command_runner.go:130] > # additional_devices = [
	I1212 00:29:38.345354  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345364  525066 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 00:29:38.345368  525066 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 00:29:38.345371  525066 command_runner.go:130] > # 	"/etc/cdi",
	I1212 00:29:38.345585  525066 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 00:29:38.345593  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345600  525066 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 00:29:38.345606  525066 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 00:29:38.345609  525066 command_runner.go:130] > # Defaults to false.
	I1212 00:29:38.345614  525066 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 00:29:38.345652  525066 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 00:29:38.345661  525066 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 00:29:38.345665  525066 command_runner.go:130] > # hooks_dir = [
	I1212 00:29:38.345877  525066 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 00:29:38.345885  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345892  525066 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 00:29:38.345899  525066 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 00:29:38.345904  525066 command_runner.go:130] > # its default mounts from the following two files:
	I1212 00:29:38.345907  525066 command_runner.go:130] > #
	I1212 00:29:38.345914  525066 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 00:29:38.345957  525066 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 00:29:38.345963  525066 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 00:29:38.345966  525066 command_runner.go:130] > #
	I1212 00:29:38.345972  525066 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 00:29:38.345979  525066 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 00:29:38.345986  525066 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 00:29:38.345991  525066 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 00:29:38.346020  525066 command_runner.go:130] > #
	I1212 00:29:38.346210  525066 command_runner.go:130] > # default_mounts_file = ""
	I1212 00:29:38.346221  525066 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 00:29:38.346228  525066 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 00:29:38.346444  525066 command_runner.go:130] > # pids_limit = -1
	I1212 00:29:38.346456  525066 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 00:29:38.346463  525066 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 00:29:38.346469  525066 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 00:29:38.346478  525066 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 00:29:38.346512  525066 command_runner.go:130] > # log_size_max = -1
	I1212 00:29:38.346523  525066 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 00:29:38.346724  525066 command_runner.go:130] > # log_to_journald = false
	I1212 00:29:38.346736  525066 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 00:29:38.346742  525066 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 00:29:38.346747  525066 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 00:29:38.347111  525066 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 00:29:38.347122  525066 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 00:29:38.347127  525066 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 00:29:38.347132  525066 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 00:29:38.347136  525066 command_runner.go:130] > # read_only = false
	I1212 00:29:38.347142  525066 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 00:29:38.347149  525066 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 00:29:38.347186  525066 command_runner.go:130] > # live configuration reload.
	I1212 00:29:38.347359  525066 command_runner.go:130] > # log_level = "info"
	I1212 00:29:38.347376  525066 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 00:29:38.347381  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.347597  525066 command_runner.go:130] > # log_filter = ""
	I1212 00:29:38.347608  525066 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 00:29:38.347615  525066 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 00:29:38.347619  525066 command_runner.go:130] > # separated by comma.
	I1212 00:29:38.347671  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347679  525066 command_runner.go:130] > # uid_mappings = ""
	I1212 00:29:38.347686  525066 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 00:29:38.347692  525066 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 00:29:38.347696  525066 command_runner.go:130] > # separated by comma.
	I1212 00:29:38.347704  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347707  525066 command_runner.go:130] > # gid_mappings = ""
	I1212 00:29:38.347714  525066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 00:29:38.347746  525066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:29:38.347757  525066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:29:38.347765  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347769  525066 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 00:29:38.347775  525066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 00:29:38.347781  525066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:29:38.347787  525066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:29:38.347822  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.348158  525066 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 00:29:38.348170  525066 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 00:29:38.348176  525066 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 00:29:38.348182  525066 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 00:29:38.348415  525066 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 00:29:38.348427  525066 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 00:29:38.348433  525066 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 00:29:38.348438  525066 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 00:29:38.348442  525066 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 00:29:38.348641  525066 command_runner.go:130] > # drop_infra_ctr = true
	I1212 00:29:38.348653  525066 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 00:29:38.348659  525066 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 00:29:38.348666  525066 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 00:29:38.348674  525066 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 00:29:38.348712  525066 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1212 00:29:38.348725  525066 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1212 00:29:38.348731  525066 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1212 00:29:38.348736  525066 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1212 00:29:38.348935  525066 command_runner.go:130] > # shared_cpuset = ""
	I1212 00:29:38.348946  525066 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 00:29:38.348952  525066 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 00:29:38.348956  525066 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 00:29:38.348964  525066 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 00:29:38.349178  525066 command_runner.go:130] > # pinns_path = ""
	I1212 00:29:38.349189  525066 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1212 00:29:38.349195  525066 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1212 00:29:38.349199  525066 command_runner.go:130] > # enable_criu_support = true
	I1212 00:29:38.349214  525066 command_runner.go:130] > # Enable/disable the generation of the container,
	I1212 00:29:38.349253  525066 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1212 00:29:38.349272  525066 command_runner.go:130] > # enable_pod_events = false
	I1212 00:29:38.349291  525066 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 00:29:38.349322  525066 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1212 00:29:38.349505  525066 command_runner.go:130] > # default_runtime = "crun"
	I1212 00:29:38.349536  525066 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 00:29:38.349573  525066 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1212 00:29:38.349601  525066 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 00:29:38.349618  525066 command_runner.go:130] > # creation as a file is not desired either.
	I1212 00:29:38.349653  525066 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 00:29:38.349674  525066 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 00:29:38.349690  525066 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 00:29:38.349956  525066 command_runner.go:130] > # ]
	I1212 00:29:38.350003  525066 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 00:29:38.350025  525066 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 00:29:38.350043  525066 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1212 00:29:38.350074  525066 command_runner.go:130] > # Each entry in the table should follow the format:
	I1212 00:29:38.350093  525066 command_runner.go:130] > #
	I1212 00:29:38.350110  525066 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1212 00:29:38.350127  525066 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1212 00:29:38.350158  525066 command_runner.go:130] > # runtime_type = "oci"
	I1212 00:29:38.350179  525066 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1212 00:29:38.350201  525066 command_runner.go:130] > # inherit_default_runtime = false
	I1212 00:29:38.350218  525066 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1212 00:29:38.350253  525066 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1212 00:29:38.350271  525066 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1212 00:29:38.350287  525066 command_runner.go:130] > # monitor_env = []
	I1212 00:29:38.350317  525066 command_runner.go:130] > # privileged_without_host_devices = false
	I1212 00:29:38.350339  525066 command_runner.go:130] > # allowed_annotations = []
	I1212 00:29:38.350358  525066 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1212 00:29:38.350372  525066 command_runner.go:130] > # no_sync_log = false
	I1212 00:29:38.350402  525066 command_runner.go:130] > # default_annotations = {}
	I1212 00:29:38.350419  525066 command_runner.go:130] > # stream_websockets = false
	I1212 00:29:38.350436  525066 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:29:38.350499  525066 command_runner.go:130] > # Where:
	I1212 00:29:38.350529  525066 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1212 00:29:38.350561  525066 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1212 00:29:38.350588  525066 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 00:29:38.350607  525066 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 00:29:38.350635  525066 command_runner.go:130] > #   in $PATH.
	I1212 00:29:38.350670  525066 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1212 00:29:38.350713  525066 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 00:29:38.350735  525066 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1212 00:29:38.350750  525066 command_runner.go:130] > #   state.
	I1212 00:29:38.350780  525066 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 00:29:38.350928  525066 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1212 00:29:38.351028  525066 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1212 00:29:38.351155  525066 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1212 00:29:38.351251  525066 command_runner.go:130] > #   the values from the default runtime on load time.
	I1212 00:29:38.351344  525066 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 00:29:38.351530  525066 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 00:29:38.351817  525066 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 00:29:38.352119  525066 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 00:29:38.352319  525066 command_runner.go:130] > #   The currently recognized values are:
	I1212 00:29:38.352557  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 00:29:38.352766  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 00:29:38.352929  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 00:29:38.353036  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 00:29:38.353153  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 00:29:38.353519  525066 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 00:29:38.353569  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1212 00:29:38.353580  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1212 00:29:38.353587  525066 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 00:29:38.353593  525066 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1212 00:29:38.353637  525066 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1212 00:29:38.353645  525066 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1212 00:29:38.353652  525066 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1212 00:29:38.353658  525066 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1212 00:29:38.353664  525066 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1212 00:29:38.353679  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1212 00:29:38.353695  525066 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1212 00:29:38.353699  525066 command_runner.go:130] > #   deprecated option "conmon".
	I1212 00:29:38.353706  525066 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1212 00:29:38.353766  525066 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1212 00:29:38.353805  525066 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1212 00:29:38.353814  525066 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 00:29:38.353822  525066 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1212 00:29:38.353826  525066 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1212 00:29:38.353834  525066 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1212 00:29:38.353838  525066 command_runner.go:130] > #   conmon-rs by using:
	I1212 00:29:38.353893  525066 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1212 00:29:38.353903  525066 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1212 00:29:38.353947  525066 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1212 00:29:38.353958  525066 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1212 00:29:38.353963  525066 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1212 00:29:38.353971  525066 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1212 00:29:38.353979  525066 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1212 00:29:38.353984  525066 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1212 00:29:38.353992  525066 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1212 00:29:38.354039  525066 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1212 00:29:38.354048  525066 command_runner.go:130] > #   when a machine crash happens.
	I1212 00:29:38.354056  525066 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1212 00:29:38.354064  525066 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1212 00:29:38.354100  525066 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1212 00:29:38.354106  525066 command_runner.go:130] > #   seccomp profile for the runtime.
	I1212 00:29:38.354113  525066 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1212 00:29:38.354120  525066 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1212 00:29:38.354123  525066 command_runner.go:130] > #
	I1212 00:29:38.354169  525066 command_runner.go:130] > # Using the seccomp notifier feature:
	I1212 00:29:38.354175  525066 command_runner.go:130] > #
	I1212 00:29:38.354188  525066 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1212 00:29:38.354195  525066 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1212 00:29:38.354198  525066 command_runner.go:130] > #
	I1212 00:29:38.354204  525066 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1212 00:29:38.354210  525066 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1212 00:29:38.354212  525066 command_runner.go:130] > #
	I1212 00:29:38.354258  525066 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1212 00:29:38.354270  525066 command_runner.go:130] > # feature.
	I1212 00:29:38.354273  525066 command_runner.go:130] > #
	I1212 00:29:38.354279  525066 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1212 00:29:38.354286  525066 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1212 00:29:38.354292  525066 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1212 00:29:38.354298  525066 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1212 00:29:38.354350  525066 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1212 00:29:38.354355  525066 command_runner.go:130] > #
	I1212 00:29:38.354362  525066 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1212 00:29:38.354402  525066 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1212 00:29:38.354408  525066 command_runner.go:130] > #
	I1212 00:29:38.354414  525066 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1212 00:29:38.354420  525066 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1212 00:29:38.354423  525066 command_runner.go:130] > #
	I1212 00:29:38.354429  525066 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1212 00:29:38.354471  525066 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1212 00:29:38.354477  525066 command_runner.go:130] > # limitation.
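
	Tying the notifier walkthrough above together: the opt-in is a single sandbox annotation, and the chosen runtime handler must list it in allowed_annotations. A minimal sketch of that annotation as a Go map literal; the key and the "stop" action come from the comments above, everything else is illustrative:

	    package main

	    import "fmt"

	    func main() {
	        // Sandbox annotations opting a pod into the seccomp notifier flow.
	        // Per the comments above, the workload is terminated ~5 seconds after a
	        // blocked syscall when the action is "stop", and the pod's restartPolicy
	        // must be "Never" so the kubelet does not restart the container.
	        annotations := map[string]string{
	            "io.kubernetes.cri-o.seccompNotifierAction": "stop",
	        }
	        fmt.Println(annotations)
	    }
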
	I1212 00:29:38.354481  525066 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1212 00:29:38.354485  525066 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1212 00:29:38.354492  525066 command_runner.go:130] > runtime_type = ""
	I1212 00:29:38.354498  525066 command_runner.go:130] > runtime_root = "/run/crun"
	I1212 00:29:38.354502  525066 command_runner.go:130] > inherit_default_runtime = false
	I1212 00:29:38.354538  525066 command_runner.go:130] > runtime_config_path = ""
	I1212 00:29:38.354545  525066 command_runner.go:130] > container_min_memory = ""
	I1212 00:29:38.354550  525066 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 00:29:38.354554  525066 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 00:29:38.354558  525066 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:29:38.354561  525066 command_runner.go:130] > allowed_annotations = [
	I1212 00:29:38.354565  525066 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1212 00:29:38.354568  525066 command_runner.go:130] > ]
	I1212 00:29:38.354573  525066 command_runner.go:130] > privileged_without_host_devices = false
	I1212 00:29:38.354577  525066 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 00:29:38.354588  525066 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1212 00:29:38.354592  525066 command_runner.go:130] > runtime_type = ""
	I1212 00:29:38.354595  525066 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 00:29:38.354647  525066 command_runner.go:130] > inherit_default_runtime = false
	I1212 00:29:38.354654  525066 command_runner.go:130] > runtime_config_path = ""
	I1212 00:29:38.354659  525066 command_runner.go:130] > container_min_memory = ""
	I1212 00:29:38.354663  525066 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 00:29:38.354667  525066 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 00:29:38.354671  525066 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:29:38.354675  525066 command_runner.go:130] > privileged_without_host_devices = false
	I1212 00:29:38.354692  525066 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 00:29:38.354700  525066 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 00:29:38.354706  525066 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 00:29:38.354719  525066 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1212 00:29:38.354731  525066 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1212 00:29:38.354778  525066 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1212 00:29:38.354787  525066 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1212 00:29:38.354793  525066 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 00:29:38.354803  525066 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 00:29:38.354848  525066 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 00:29:38.354862  525066 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 00:29:38.354870  525066 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 00:29:38.354909  525066 command_runner.go:130] > # Example:
	I1212 00:29:38.354916  525066 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 00:29:38.354921  525066 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 00:29:38.354929  525066 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 00:29:38.354970  525066 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 00:29:38.354976  525066 command_runner.go:130] > # cpuset = "0-1"
	I1212 00:29:38.354979  525066 command_runner.go:130] > # cpushares = "5"
	I1212 00:29:38.354982  525066 command_runner.go:130] > # cpuquota = "1000"
	I1212 00:29:38.354986  525066 command_runner.go:130] > # cpuperiod = "100000"
	I1212 00:29:38.354989  525066 command_runner.go:130] > # cpulimit = "35"
	I1212 00:29:38.354992  525066 command_runner.go:130] > # Where:
	I1212 00:29:38.355002  525066 command_runner.go:130] > # The workload name is workload-type.
	I1212 00:29:38.355009  525066 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 00:29:38.355015  525066 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 00:29:38.355066  525066 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 00:29:38.355077  525066 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 00:29:38.355083  525066 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
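The "cpulimit" note above is plain arithmetic: a limit in millicores scaled by the period. A small Go sketch, assuming the direct conversion the comment implies (quota = millicores * period / 1000):

    package main

    import "fmt"

    // quotaFromMillicores: 500 millicores with the default 100000us period
    // yields a 50000us quota.
    func quotaFromMillicores(limitMillicores, periodMicros int64) int64 {
        return limitMillicores * periodMicros / 1000
    }

    func main() {
        fmt.Println(quotaFromMillicores(500, 100000)) // 50000
    }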
	I1212 00:29:38.355088  525066 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1212 00:29:38.355095  525066 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1212 00:29:38.355099  525066 command_runner.go:130] > # Default value is set to true
	I1212 00:29:38.355467  525066 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1212 00:29:38.355620  525066 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1212 00:29:38.355721  525066 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1212 00:29:38.355871  525066 command_runner.go:130] > # Default value is set to 'false'
	I1212 00:29:38.356033  525066 command_runner.go:130] > # disable_hostport_mapping = false
	I1212 00:29:38.356163  525066 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1212 00:29:38.356284  525066 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1212 00:29:38.356367  525066 command_runner.go:130] > # timezone = ""
	I1212 00:29:38.356485  525066 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 00:29:38.356560  525066 command_runner.go:130] > #
	I1212 00:29:38.356636  525066 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 00:29:38.356830  525066 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1212 00:29:38.356937  525066 command_runner.go:130] > [crio.image]
	I1212 00:29:38.357065  525066 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 00:29:38.357172  525066 command_runner.go:130] > # default_transport = "docker://"
	I1212 00:29:38.357258  525066 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 00:29:38.357455  525066 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:29:38.357729  525066 command_runner.go:130] > # global_auth_file = ""
	I1212 00:29:38.357787  525066 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 00:29:38.357796  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.357801  525066 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.357809  525066 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 00:29:38.357821  525066 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:29:38.357827  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.357837  525066 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 00:29:38.357843  525066 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 00:29:38.357850  525066 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 00:29:38.358627  525066 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 00:29:38.358638  525066 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 00:29:38.358643  525066 command_runner.go:130] > # pause_command = "/pause"
	I1212 00:29:38.358649  525066 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1212 00:29:38.358655  525066 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1212 00:29:38.358662  525066 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1212 00:29:38.358668  525066 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1212 00:29:38.358674  525066 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1212 00:29:38.358693  525066 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1212 00:29:38.358700  525066 command_runner.go:130] > # pinned_images = [
	I1212 00:29:38.358703  525066 command_runner.go:130] > # ]
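The three pinned_images pattern styles above (exact, trailing-* glob, keyword with wildcards on both ends) can be illustrated in a few lines of Go; this follows the rules as described and is not CRI-O's actual implementation:

    package main

    import (
        "fmt"
        "strings"
    )

    // matchesPinned applies the documented rules: exact patterns must match
    // the entire name, glob patterns may end in *, keyword patterns are
    // wrapped in * on both ends.
    func matchesPinned(pattern, image string) bool {
        switch {
        case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
            return strings.Contains(image, strings.Trim(pattern, "*"))
        case strings.HasSuffix(pattern, "*"):
            return strings.HasPrefix(image, strings.TrimSuffix(pattern, "*"))
        default:
            return pattern == image
        }
    }

    func main() {
        fmt.Println(matchesPinned("registry.k8s.io/pause*", "registry.k8s.io/pause:3.10.1")) // true
        fmt.Println(matchesPinned("*pause*", "registry.k8s.io/pause:3.10.1"))                // true
        fmt.Println(matchesPinned("registry.k8s.io/pause", "registry.k8s.io/pause:3.10.1"))  // false
    }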
	I1212 00:29:38.358709  525066 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 00:29:38.358716  525066 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 00:29:38.358723  525066 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 00:29:38.358729  525066 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 00:29:38.358734  525066 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 00:29:38.358740  525066 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1212 00:29:38.358745  525066 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1212 00:29:38.358752  525066 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1212 00:29:38.358758  525066 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1212 00:29:38.358764  525066 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1212 00:29:38.358771  525066 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1212 00:29:38.358776  525066 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
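The fallback chain above (per-namespace policy under signature_policy_dir, then the global signature_policy) resolves to a simple path lookup; a sketch using the paths from this config dump:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // policyFor returns <signature_policy_dir>/<namespace>.json when that
    // file exists, otherwise the global signature_policy path.
    func policyFor(namespace string) string {
        if namespace != "" {
            p := filepath.Join("/etc/crio/policies", namespace+".json")
            if _, err := os.Stat(p); err == nil {
                return p
            }
        }
        return "/etc/crio/policy.json"
    }

    func main() {
        fmt.Println(policyFor("kube-system"))
    }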
	I1212 00:29:38.358782  525066 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 00:29:38.358788  525066 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 00:29:38.358791  525066 command_runner.go:130] > # changing them here.
	I1212 00:29:38.358801  525066 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1212 00:29:38.358805  525066 command_runner.go:130] > # insecure_registries = [
	I1212 00:29:38.358808  525066 command_runner.go:130] > # ]
	I1212 00:29:38.358814  525066 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 00:29:38.358828  525066 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1212 00:29:38.358833  525066 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 00:29:38.358838  525066 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 00:29:38.358842  525066 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 00:29:38.358848  525066 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1212 00:29:38.358855  525066 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1212 00:29:38.358860  525066 command_runner.go:130] > # auto_reload_registries = false
	I1212 00:29:38.358866  525066 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1212 00:29:38.358874  525066 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1212 00:29:38.358881  525066 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1212 00:29:38.358885  525066 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1212 00:29:38.358889  525066 command_runner.go:130] > # The mode of short name resolution.
	I1212 00:29:38.358896  525066 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1212 00:29:38.358903  525066 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1212 00:29:38.358908  525066 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1212 00:29:38.358913  525066 command_runner.go:130] > # short_name_mode = "enforcing"
	I1212 00:29:38.358919  525066 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1212 00:29:38.358925  525066 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1212 00:29:38.358929  525066 command_runner.go:130] > # oci_artifact_mount_support = true
	I1212 00:29:38.358935  525066 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 00:29:38.358938  525066 command_runner.go:130] > # CNI plugins.
	I1212 00:29:38.358941  525066 command_runner.go:130] > [crio.network]
	I1212 00:29:38.358947  525066 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 00:29:38.358952  525066 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1212 00:29:38.358956  525066 command_runner.go:130] > # cni_default_network = ""
	I1212 00:29:38.358966  525066 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 00:29:38.358970  525066 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 00:29:38.358975  525066 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 00:29:38.358979  525066 command_runner.go:130] > # plugin_dirs = [
	I1212 00:29:38.358982  525066 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 00:29:38.358985  525066 command_runner.go:130] > # ]
	I1212 00:29:38.358989  525066 command_runner.go:130] > # List of included pod metrics.
	I1212 00:29:38.358993  525066 command_runner.go:130] > # included_pod_metrics = [
	I1212 00:29:38.359000  525066 command_runner.go:130] > # ]
	I1212 00:29:38.359005  525066 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1212 00:29:38.359010  525066 command_runner.go:130] > [crio.metrics]
	I1212 00:29:38.359017  525066 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 00:29:38.359024  525066 command_runner.go:130] > # enable_metrics = false
	I1212 00:29:38.359029  525066 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 00:29:38.359034  525066 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 00:29:38.359040  525066 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 00:29:38.359048  525066 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 00:29:38.359054  525066 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 00:29:38.359068  525066 command_runner.go:130] > # metrics_collectors = [
	I1212 00:29:38.359072  525066 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 00:29:38.359076  525066 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1212 00:29:38.359079  525066 command_runner.go:130] > # 	"containers_oom_total",
	I1212 00:29:38.359083  525066 command_runner.go:130] > # 	"processes_defunct",
	I1212 00:29:38.359087  525066 command_runner.go:130] > # 	"operations_total",
	I1212 00:29:38.359091  525066 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 00:29:38.359095  525066 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 00:29:38.359099  525066 command_runner.go:130] > # 	"operations_errors_total",
	I1212 00:29:38.359103  525066 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 00:29:38.359107  525066 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 00:29:38.359111  525066 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 00:29:38.359115  525066 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 00:29:38.359119  525066 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 00:29:38.359123  525066 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 00:29:38.359128  525066 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1212 00:29:38.359132  525066 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1212 00:29:38.359137  525066 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1212 00:29:38.359139  525066 command_runner.go:130] > # ]
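The collector-name equivalence described above amounts to stripping two optional prefixes, so "operations", "crio_operations" and "container_runtime_crio_operations" all name the same collector. A sketch of that normalization (an illustration, not CRI-O's own code):

    package main

    import (
        "fmt"
        "strings"
    )

    // normalizeCollector strips the optional "container_runtime_" and
    // "crio_" prefixes from a metrics collector name.
    func normalizeCollector(name string) string {
        name = strings.TrimPrefix(name, "container_runtime_")
        return strings.TrimPrefix(name, "crio_")
    }

    func main() {
        fmt.Println(normalizeCollector("container_runtime_crio_operations")) // operations
        fmt.Println(normalizeCollector("crio_operations"))                   // operations
    }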
	I1212 00:29:38.359145  525066 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1212 00:29:38.359149  525066 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1212 00:29:38.359155  525066 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 00:29:38.359158  525066 command_runner.go:130] > # metrics_port = 9090
	I1212 00:29:38.359167  525066 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 00:29:38.359171  525066 command_runner.go:130] > # metrics_socket = ""
	I1212 00:29:38.359176  525066 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 00:29:38.359182  525066 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 00:29:38.359188  525066 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 00:29:38.359192  525066 command_runner.go:130] > # certificate on any modification event.
	I1212 00:29:38.359196  525066 command_runner.go:130] > # metrics_cert = ""
	I1212 00:29:38.359201  525066 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 00:29:38.359206  525066 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 00:29:38.359209  525066 command_runner.go:130] > # metrics_key = ""
	I1212 00:29:38.359214  525066 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 00:29:38.359218  525066 command_runner.go:130] > [crio.tracing]
	I1212 00:29:38.359224  525066 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 00:29:38.359227  525066 command_runner.go:130] > # enable_tracing = false
	I1212 00:29:38.359233  525066 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1212 00:29:38.359237  525066 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1212 00:29:38.359243  525066 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1212 00:29:38.359249  525066 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 00:29:38.359253  525066 command_runner.go:130] > # CRI-O NRI configuration.
	I1212 00:29:38.359256  525066 command_runner.go:130] > [crio.nri]
	I1212 00:29:38.359260  525066 command_runner.go:130] > # Globally enable or disable NRI.
	I1212 00:29:38.359458  525066 command_runner.go:130] > # enable_nri = true
	I1212 00:29:38.359492  525066 command_runner.go:130] > # NRI socket to listen on.
	I1212 00:29:38.359531  525066 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1212 00:29:38.359552  525066 command_runner.go:130] > # NRI plugin directory to use.
	I1212 00:29:38.359571  525066 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1212 00:29:38.359603  525066 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1212 00:29:38.359625  525066 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1212 00:29:38.359646  525066 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1212 00:29:38.359766  525066 command_runner.go:130] > # nri_disable_connections = false
	I1212 00:29:38.359799  525066 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1212 00:29:38.359833  525066 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1212 00:29:38.359860  525066 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1212 00:29:38.359876  525066 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1212 00:29:38.359893  525066 command_runner.go:130] > # NRI default validator configuration.
	I1212 00:29:38.359933  525066 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1212 00:29:38.359959  525066 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1212 00:29:38.359990  525066 command_runner.go:130] > # can be restricted/rejected:
	I1212 00:29:38.360015  525066 command_runner.go:130] > # - OCI hook injection
	I1212 00:29:38.360033  525066 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1212 00:29:38.360064  525066 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1212 00:29:38.360089  525066 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1212 00:29:38.360107  525066 command_runner.go:130] > # - adjustment of linux namespaces
	I1212 00:29:38.360127  525066 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1212 00:29:38.360166  525066 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1212 00:29:38.360186  525066 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1212 00:29:38.360201  525066 command_runner.go:130] > #
	I1212 00:29:38.360237  525066 command_runner.go:130] > # [crio.nri.default_validator]
	I1212 00:29:38.360255  525066 command_runner.go:130] > # nri_enable_default_validator = false
	I1212 00:29:38.360272  525066 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1212 00:29:38.360303  525066 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1212 00:29:38.360330  525066 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1212 00:29:38.360348  525066 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1212 00:29:38.360476  525066 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1212 00:29:38.360648  525066 command_runner.go:130] > # nri_validator_required_plugins = [
	I1212 00:29:38.360681  525066 command_runner.go:130] > # ]
	I1212 00:29:38.360704  525066 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1212 00:29:38.360740  525066 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 00:29:38.360764  525066 command_runner.go:130] > [crio.stats]
	I1212 00:29:38.360783  525066 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 00:29:38.360814  525066 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 00:29:38.360847  525066 command_runner.go:130] > # stats_collection_period = 0
	I1212 00:29:38.360867  525066 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1212 00:29:38.360905  525066 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1212 00:29:38.360921  525066 command_runner.go:130] > # collection_period = 0
	I1212 00:29:38.360984  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313366715Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1212 00:29:38.361015  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313641917Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1212 00:29:38.361052  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313871475Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1212 00:29:38.361075  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.314022397Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1212 00:29:38.361124  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.314372427Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:38.361154  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.31485409Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1212 00:29:38.361178  525066 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 00:29:38.361311  525066 cni.go:84] Creating CNI manager for ""
	I1212 00:29:38.361353  525066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:38.361385  525066 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:29:38.361436  525066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-035643 NodeName:functional-035643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:29:38.361629  525066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-035643"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:29:38.361753  525066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:29:38.369085  525066 command_runner.go:130] > kubeadm
	I1212 00:29:38.369101  525066 command_runner.go:130] > kubectl
	I1212 00:29:38.369105  525066 command_runner.go:130] > kubelet
	I1212 00:29:38.369321  525066 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:29:38.369385  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:29:38.376829  525066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:29:38.389638  525066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:29:38.402701  525066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1212 00:29:38.415693  525066 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:29:38.420581  525066 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 00:29:38.420662  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:38.566232  525066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:39.219049  525066 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643 for IP: 192.168.49.2
	I1212 00:29:39.219079  525066 certs.go:195] generating shared ca certs ...
	I1212 00:29:39.219096  525066 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:39.219238  525066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 00:29:39.219285  525066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 00:29:39.219292  525066 certs.go:257] generating profile certs ...
	I1212 00:29:39.219491  525066 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key
	I1212 00:29:39.219603  525066 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493
	I1212 00:29:39.219699  525066 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key
	I1212 00:29:39.219742  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:29:39.219761  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:29:39.219773  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:29:39.219783  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:29:39.219798  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:29:39.219843  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:29:39.219860  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:29:39.219871  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:29:39.219967  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 00:29:39.220038  525066 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 00:29:39.220049  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:29:39.220117  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:29:39.220147  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:29:39.220202  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 00:29:39.220256  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:29:39.220332  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem -> /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.220378  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.220396  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.221003  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:29:39.242927  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:29:39.262484  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:29:39.285732  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:29:39.303346  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:29:39.320786  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:29:39.338821  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:29:39.356806  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:29:39.374381  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 00:29:39.392333  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 00:29:39.410089  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:29:39.427383  525066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:29:39.439725  525066 ssh_runner.go:195] Run: openssl version
	I1212 00:29:39.445636  525066 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 00:29:39.445982  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.453236  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 00:29:39.460672  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464184  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464289  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464344  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.505960  525066 command_runner.go:130] > 51391683
	I1212 00:29:39.506560  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:29:39.514611  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.522360  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 00:29:39.531109  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.534913  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.535312  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.535374  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.578207  525066 command_runner.go:130] > 3ec20f2e
	I1212 00:29:39.578374  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:29:39.586281  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.593845  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:29:39.601415  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605435  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605483  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605537  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.646250  525066 command_runner.go:130] > b5213941
	I1212 00:29:39.646757  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
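The command sequence above follows the standard OpenSSL trust-store layout: symlink the certificate into /etc/ssl/certs, compute its subject hash with `openssl x509 -hash`, and check for the <hash>.0 symlink that OpenSSL resolves at lookup time (b5213941.0 for minikubeCA here). A condensed Go sketch of the same steps; it only verifies the hash link, since creating a missing one is not shown in this log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func installCACert(src, name string) error {
        // Symlink the cert under its friendly name.
        if err := exec.Command("sudo", "ln", "-fs", src, "/etc/ssl/certs/"+name).Run(); err != nil {
            return err
        }
        // Compute the OpenSSL subject hash (e.g. "b5213941").
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        // Verify the <hash>.0 symlink OpenSSL uses for CA lookup.
        return exec.Command("sudo", "test", "-L", "/etc/ssl/certs/"+hash+".0").Run()
    }

    func main() {
        fmt.Println(installCACert("/usr/share/ca-certificates/minikubeCA.pem", "minikubeCA.pem"))
    }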
	I1212 00:29:39.654391  525066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:39.658287  525066 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:39.658314  525066 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 00:29:39.658322  525066 command_runner.go:130] > Device: 259,1	Inode: 2360480     Links: 1
	I1212 00:29:39.658330  525066 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:29:39.658336  525066 command_runner.go:130] > Access: 2025-12-12 00:25:30.972268820 +0000
	I1212 00:29:39.658341  525066 command_runner.go:130] > Modify: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658346  525066 command_runner.go:130] > Change: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658351  525066 command_runner.go:130] >  Birth: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658416  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:29:39.699997  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.700109  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:29:39.748952  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.749499  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:29:39.797710  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.798154  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:29:39.843103  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.843601  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:29:39.887374  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.887871  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:29:39.942362  525066 command_runner.go:130] > Certificate will not expire
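Each `-checkend 86400` call above asks whether a certificate stays valid for at least another 24 hours (86400 seconds); "Certificate will not expire" is the passing output. The equivalent check in Go with crypto/x509 (a sketch; the path is one of the certs from this run):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // notExpiringWithin mirrors `openssl x509 -checkend`: true means the
    // certificate remains valid for at least the given duration.
    func notExpiringWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(notExpiringWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour))
    }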
	I1212 00:29:39.942946  525066 kubeadm.go:401] StartCluster: {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:39.943046  525066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:29:39.943208  525066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:29:39.985575  525066 cri.go:89] found id: ""
	I1212 00:29:39.985700  525066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:29:39.993609  525066 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 00:29:39.993681  525066 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 00:29:39.993702  525066 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 00:29:39.994895  525066 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:29:39.994945  525066 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:29:39.995038  525066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:29:40.006978  525066 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:29:40.007554  525066 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-035643" does not appear in /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.007785  525066 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-487723/kubeconfig needs updating (will repair): [kubeconfig missing "functional-035643" cluster setting kubeconfig missing "functional-035643" context setting]
	I1212 00:29:40.008175  525066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
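The "needs updating (will repair)" decision above reduces to loading the kubeconfig and checking that the named cluster and context entries exist. A sketch with client-go's clientcmd package (the path argument is illustrative):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    // needsRepair reports whether the kubeconfig is missing the cluster or
    // context entry for the given profile name.
    func needsRepair(path, name string) (bool, error) {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return false, err
        }
        _, hasCluster := cfg.Clusters[name]
        _, hasContext := cfg.Contexts[name]
        return !hasCluster || !hasContext, nil
    }

    func main() {
        fmt.Println(needsRepair("/home/jenkins/.kube/config", "functional-035643"))
    }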
	I1212 00:29:40.008787  525066 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.009179  525066 kapi.go:59] client config for functional-035643: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:40.009975  525066 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 00:29:40.010118  525066 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 00:29:40.010148  525066 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:29:40.010168  525066 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 00:29:40.010204  525066 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 00:29:40.010223  525066 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 00:29:40.010646  525066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:29:40.025803  525066 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 00:29:40.025893  525066 kubeadm.go:602] duration metric: took 30.929693ms to restartPrimaryControlPlane
	I1212 00:29:40.025918  525066 kubeadm.go:403] duration metric: took 82.978705ms to StartCluster
	I1212 00:29:40.025961  525066 settings.go:142] acquiring lock: {Name:mk274c10b2238dc32d72b68ac2e1ec517b8a72b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.026057  525066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.026847  525066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.027182  525066 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:29:40.027614  525066 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:29:40.027718  525066 addons.go:70] Setting storage-provisioner=true in profile "functional-035643"
	I1212 00:29:40.027733  525066 addons.go:239] Setting addon storage-provisioner=true in "functional-035643"
	I1212 00:29:40.027759  525066 host.go:66] Checking if "functional-035643" exists ...
	I1212 00:29:40.027683  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:40.027963  525066 addons.go:70] Setting default-storageclass=true in profile "functional-035643"
	I1212 00:29:40.028014  525066 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-035643"
	I1212 00:29:40.028265  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.028431  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.031408  525066 out.go:179] * Verifying Kubernetes components...
	I1212 00:29:40.035144  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:40.072983  525066 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.073191  525066 kapi.go:59] client config for functional-035643: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:40.073564  525066 addons.go:239] Setting addon default-storageclass=true in "functional-035643"
	I1212 00:29:40.073635  525066 host.go:66] Checking if "functional-035643" exists ...
	I1212 00:29:40.074143  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.079735  525066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:29:40.083203  525066 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:40.083224  525066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:29:40.083308  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:40.126926  525066 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:40.126953  525066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:29:40.127024  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:40.157562  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:40.176759  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:40.228329  525066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:40.297459  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:40.324896  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:40.970121  525066 node_ready.go:35] waiting up to 6m0s for node "functional-035643" to be "Ready" ...
	I1212 00:29:40.970322  525066 type.go:168] "Request Body" body=""
	I1212 00:29:40.970407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:40.970561  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:40.970616  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.970718  525066 retry.go:31] will retry after 204.18222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.970890  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:40.970976  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.971113  525066 retry.go:31] will retry after 159.994769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.971100  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
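The Request/Response pairs tagged round_trippers.go:527/632 come from a debug wrapper around the HTTP transport; status="" with milliseconds=0 is what a refused TCP connection looks like through that lens. A minimal sketch of such a wrapper (hypothetical type, not client-go's implementation):

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    // loggingTransport logs each request and its outcome, mirroring the
    // Request/Response pairs in the trace above.
    type loggingTransport struct{ next http.RoundTripper }

    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        start := time.Now()
        log.Printf("Request verb=%s url=%s", req.Method, req.URL)
        resp, err := t.next.RoundTrip(req)
        if err != nil {
            // No HTTP status exists when the dial itself fails.
            log.Printf("Response status=%q err=%v milliseconds=%d", "", err, time.Since(start).Milliseconds())
            return nil, err
        }
        log.Printf("Response status=%q milliseconds=%d", resp.Status, time.Since(start).Milliseconds())
        return resp, nil
    }

    func main() {
        client := &http.Client{Transport: loggingTransport{http.DefaultTransport}}
        if resp, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-035643"); err == nil {
            resp.Body.Close()
        }
    }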
	I1212 00:29:41.131658  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:41.175423  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:41.193550  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.193607  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.193625  525066 retry.go:31] will retry after 255.861028ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.245543  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.245583  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.245622  525066 retry.go:31] will retry after 363.545377ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.449762  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:41.471214  525066 type.go:168] "Request Body" body=""
	I1212 00:29:41.471319  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:41.471599  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:41.515695  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.515762  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.515785  525066 retry.go:31] will retry after 558.343872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.610204  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:41.681946  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.682005  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.682029  525066 retry.go:31] will retry after 553.13192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.971401  525066 type.go:168] "Request Body" body=""
	I1212 00:29:41.971545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:41.971960  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:42.075338  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:42.153789  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.153831  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.153875  525066 retry.go:31] will retry after 562.779161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.238244  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:42.309134  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.309235  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.309278  525066 retry.go:31] will retry after 839.848798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.470350  525066 type.go:168] "Request Body" body=""
	I1212 00:29:42.470438  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:42.470717  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:42.717299  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:42.779260  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.779300  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.779319  525066 retry.go:31] will retry after 1.384955704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.970802  525066 type.go:168] "Request Body" body=""
	I1212 00:29:42.970878  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:42.971167  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:42.971212  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
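The warn line states the poll contract: connection errors are retryable rather than fatal, and the GET repeats on a roughly 500ms cadence for up to the 6m0s budget declared earlier. A sketch of that loop with client-go, assuming a clientset built from the kubeconfig (node name and paths taken from the log):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node until it reports Ready, swallowing transient
    // errors such as "connection refused" so the apiserver has time to return.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // retryable: apiserver not reachable yet
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitNodeReady(context.Background(), cs, "functional-035643"); err != nil {
            panic(err)
        }
    }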
	I1212 00:29:43.149494  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:43.213920  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:43.218125  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:43.218200  525066 retry.go:31] will retry after 1.154245365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:43.470517  525066 type.go:168] "Request Body" body=""
	I1212 00:29:43.470604  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:43.470922  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:43.970580  525066 type.go:168] "Request Body" body=""
	I1212 00:29:43.970743  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:43.971073  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:44.165470  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:44.225816  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:44.225880  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.225901  525066 retry.go:31] will retry after 2.063043455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.373318  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:44.437999  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:44.441831  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.441865  525066 retry.go:31] will retry after 1.856604218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.471071  525066 type.go:168] "Request Body" body=""
	I1212 00:29:44.471144  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:44.471437  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:44.971289  525066 type.go:168] "Request Body" body=""
	I1212 00:29:44.971379  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:44.971730  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:44.971780  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:45.470482  525066 type.go:168] "Request Body" body=""
	I1212 00:29:45.470622  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:45.470959  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:45.970491  525066 type.go:168] "Request Body" body=""
	I1212 00:29:45.970565  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:45.970940  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:46.289221  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:46.298644  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:46.387298  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:46.387341  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.387359  525066 retry.go:31] will retry after 2.162137781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.389923  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:46.389964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.389984  525066 retry.go:31] will retry after 2.885458194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.471167  525066 type.go:168] "Request Body" body=""
	I1212 00:29:46.471247  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:46.471565  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:46.971278  525066 type.go:168] "Request Body" body=""
	I1212 00:29:46.971393  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:46.971713  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:46.971800  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:47.471406  525066 type.go:168] "Request Body" body=""
	I1212 00:29:47.471481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:47.471794  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:47.970503  525066 type.go:168] "Request Body" body=""
	I1212 00:29:47.970590  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:47.970978  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:48.470459  525066 type.go:168] "Request Body" body=""
	I1212 00:29:48.470563  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:48.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:48.550228  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:48.609468  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:48.609564  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:48.609586  525066 retry.go:31] will retry after 5.142469671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:48.970999  525066 type.go:168] "Request Body" body=""
	I1212 00:29:48.971081  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:48.971378  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:49.275822  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:49.338921  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:49.338964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:49.338982  525066 retry.go:31] will retry after 3.130992497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:49.471334  525066 type.go:168] "Request Body" body=""
	I1212 00:29:49.471407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:49.471715  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:49.471774  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:49.970357  525066 type.go:168] "Request Body" body=""
	I1212 00:29:49.970428  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:49.970800  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:50.470449  525066 type.go:168] "Request Body" body=""
	I1212 00:29:50.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:50.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:50.970632  525066 type.go:168] "Request Body" body=""
	I1212 00:29:50.970736  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:50.971135  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:51.470850  525066 type.go:168] "Request Body" body=""
	I1212 00:29:51.470934  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:51.471301  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:51.971160  525066 type.go:168] "Request Body" body=""
	I1212 00:29:51.971232  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:51.971562  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:51.971629  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:52.470175  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:52.470342  525066 type.go:168] "Request Body" body=""
	I1212 00:29:52.470395  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:52.470704  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:52.525865  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:52.529169  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:52.529199  525066 retry.go:31] will retry after 5.202817608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:52.970512  525066 type.go:168] "Request Body" body=""
	I1212 00:29:52.970577  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:52.970929  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:53.470488  525066 type.go:168] "Request Body" body=""
	I1212 00:29:53.470560  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:53.470915  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:53.752286  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:53.818071  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:53.818120  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:53.818138  525066 retry.go:31] will retry after 7.493688168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:53.970432  525066 type.go:168] "Request Body" body=""
	I1212 00:29:53.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:53.970820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:54.470420  525066 type.go:168] "Request Body" body=""
	I1212 00:29:54.470487  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:54.470795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:54.470851  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:54.970811  525066 type.go:168] "Request Body" body=""
	I1212 00:29:54.970890  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:54.971241  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:55.471081  525066 type.go:168] "Request Body" body=""
	I1212 00:29:55.471155  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:55.471463  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:55.971189  525066 type.go:168] "Request Body" body=""
	I1212 00:29:55.971259  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:55.971627  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:56.470367  525066 type.go:168] "Request Body" body=""
	I1212 00:29:56.470446  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:56.470766  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:56.970401  525066 type.go:168] "Request Body" body=""
	I1212 00:29:56.970473  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:56.970813  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:56.970885  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:57.470373  525066 type.go:168] "Request Body" body=""
	I1212 00:29:57.470446  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:57.470716  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:57.732201  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:57.788085  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:57.792139  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:57.792170  525066 retry.go:31] will retry after 6.658571386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:57.970423  525066 type.go:168] "Request Body" body=""
	I1212 00:29:57.970495  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:57.970833  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:58.470545  525066 type.go:168] "Request Body" body=""
	I1212 00:29:58.470620  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:58.470971  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:58.970653  525066 type.go:168] "Request Body" body=""
	I1212 00:29:58.970748  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:58.971004  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:58.971063  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:59.470479  525066 type.go:168] "Request Body" body=""
	I1212 00:29:59.470553  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:59.470886  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:59.970903  525066 type.go:168] "Request Body" body=""
	I1212 00:29:59.970985  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:59.971299  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:00.470879  525066 type.go:168] "Request Body" body=""
	I1212 00:30:00.470978  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:00.471351  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:00.971213  525066 type.go:168] "Request Body" body=""
	I1212 00:30:00.971307  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:00.971736  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:00.971826  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
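Note the two vantage points on the same failure: the node-local kubectl dials localhost:8441 (its kubeconfig targets the node itself), while the test host dials 192.168.49.2:8441; both are refused because nothing is listening on 8441 yet. The retries above are, in effect, probing for a listener; stated directly as code (hypothetical helper):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // apiserverUp reports whether anything accepts TCP connections on the
    // apiserver address. It cannot distinguish a healthy apiserver from some
    // other listener, but it does distinguish "connection refused" from "up".
    func apiserverUp(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        fmt.Println(apiserverUp("192.168.49.2:8441"))
    }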
	I1212 00:30:01.312112  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:01.378306  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:01.384542  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:01.384581  525066 retry.go:31] will retry after 9.383564416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:01.470976  525066 type.go:168] "Request Body" body=""
	I1212 00:30:01.471119  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:01.471452  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:01.971252  525066 type.go:168] "Request Body" body=""
	I1212 00:30:01.971351  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:01.971665  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:02.470427  525066 type.go:168] "Request Body" body=""
	I1212 00:30:02.470507  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:02.470916  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:02.970616  525066 type.go:168] "Request Body" body=""
	I1212 00:30:02.970721  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:02.971066  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:03.470621  525066 type.go:168] "Request Body" body=""
	I1212 00:30:03.470716  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:03.470992  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:03.471037  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:03.970767  525066 type.go:168] "Request Body" body=""
	I1212 00:30:03.970845  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:03.971214  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:04.450915  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:04.471249  525066 type.go:168] "Request Body" body=""
	I1212 00:30:04.471318  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:04.471581  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:04.504992  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:04.508551  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:04.508584  525066 retry.go:31] will retry after 16.635241248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
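
The validation failure itself is a side effect of the outage: kubectl apply first downloads the OpenAPI schema from the apiserver, so with port 8441 refusing connections even a well-formed manifest fails validation. The error text points at the standard escape hatch, --validate=false; here is a hypothetical sketch of issuing the same apply with validation disabled (minikube instead keeps validation on and simply retries):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyWithoutValidation mirrors the logged kubectl invocation but
    // adds --validate=false, the workaround the error message suggests
    // when the OpenAPI endpoint is unreachable. Sketch only.
    func applyWithoutValidation(kubeconfig, manifest string) error {
    	cmd := exec.Command("kubectl", "apply", "--force",
    		"--validate=false", "-f", manifest)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply failed: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := applyWithoutValidation(
    		"/var/lib/minikube/kubeconfig",
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    	); err != nil {
    		fmt.Println(err)
    	}
    }
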
	I1212 00:30:04.971271  525066 type.go:168] "Request Body" body=""
	I1212 00:30:04.971364  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:04.971628  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:05.470444  525066 type.go:168] "Request Body" body=""
	I1212 00:30:05.470516  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:05.470888  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:05.970469  525066 type.go:168] "Request Body" body=""
	I1212 00:30:05.970547  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:05.970907  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:05.970959  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:06.470384  525066 type.go:168] "Request Body" body=""
	I1212 00:30:06.470477  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:06.470800  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:06.970490  525066 type.go:168] "Request Body" body=""
	I1212 00:30:06.970569  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:06.970938  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:07.470503  525066 type.go:168] "Request Body" body=""
	I1212 00:30:07.470599  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:07.470930  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:07.970442  525066 type.go:168] "Request Body" body=""
	I1212 00:30:07.970519  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:07.970789  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:08.470442  525066 type.go:168] "Request Body" body=""
	I1212 00:30:08.470518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:08.470850  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:08.470905  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:08.970456  525066 type.go:168] "Request Body" body=""
	I1212 00:30:08.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:08.970891  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:09.470554  525066 type.go:168] "Request Body" body=""
	I1212 00:30:09.470632  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:09.470900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:09.970929  525066 type.go:168] "Request Body" body=""
	I1212 00:30:09.971012  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:09.971327  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:10.470376  525066 type.go:168] "Request Body" body=""
	I1212 00:30:10.470457  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:10.470750  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:10.768281  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:10.825103  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:10.828984  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:10.829014  525066 retry.go:31] will retry after 8.149625317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
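
Each ssh_runner.go "Run:" line executes its command inside the minikube node rather than on the test host. A sketch of that mechanism using golang.org/x/crypto/ssh is below; the address, user, and host-key handling are assumptions for illustration (minikube's docker driver may also exec into the container directly):

    package main

    import (
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH executes one command inside the node, the way
    // ssh_runner.go does for the kubectl apply calls in this log.
    // Auth and host-key verification are stubbed out for the sketch.
    func runOverSSH(addr string, cfg *ssh.ClientConfig, cmd string) (string, error) {
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()

    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	cfg := &ssh.ClientConfig{
    		User:            "docker", // assumed node user
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	out, err := runOverSSH("192.168.49.2:22", cfg,
    		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml")
    	fmt.Println(out, err)
    }
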
	I1212 00:30:10.971311  525066 type.go:168] "Request Body" body=""
	I1212 00:30:10.971379  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:10.971644  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:10.971683  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:11.470436  525066 type.go:168] "Request Body" body=""
	I1212 00:30:11.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:11.470843  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:11.970432  525066 type.go:168] "Request Body" body=""
	I1212 00:30:11.970527  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:11.970846  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:12.470525  525066 type.go:168] "Request Body" body=""
	I1212 00:30:12.470603  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:12.470941  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:12.970791  525066 type.go:168] "Request Body" body=""
	I1212 00:30:12.970866  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:12.971205  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:13.470475  525066 type.go:168] "Request Body" body=""
	I1212 00:30:13.470560  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:13.470875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:13.470931  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:13.970549  525066 type.go:168] "Request Body" body=""
	I1212 00:30:13.970621  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:13.970905  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:14.470469  525066 type.go:168] "Request Body" body=""
	I1212 00:30:14.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:14.470911  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:14.970901  525066 type.go:168] "Request Body" body=""
	I1212 00:30:14.970980  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:14.971358  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:15.471006  525066 type.go:168] "Request Body" body=""
	I1212 00:30:15.471085  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:15.471350  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:15.471390  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:15.971171  525066 type.go:168] "Request Body" body=""
	I1212 00:30:15.971259  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:15.971595  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:16.471255  525066 type.go:168] "Request Body" body=""
	I1212 00:30:16.471330  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:16.471636  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:16.970382  525066 type.go:168] "Request Body" body=""
	I1212 00:30:16.970466  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:16.970768  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:17.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:30:17.470517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:17.470833  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:17.970436  525066 type.go:168] "Request Body" body=""
	I1212 00:30:17.970514  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:17.970839  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:17.970896  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:18.470516  525066 type.go:168] "Request Body" body=""
	I1212 00:30:18.470594  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:18.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:18.970641  525066 type.go:168] "Request Body" body=""
	I1212 00:30:18.970763  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:18.971104  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:18.979423  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:19.044083  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:19.044119  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:19.044140  525066 retry.go:31] will retry after 30.537522265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:19.470570  525066 type.go:168] "Request Body" body=""
	I1212 00:30:19.470653  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:19.471007  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:19.971048  525066 type.go:168] "Request Body" body=""
	I1212 00:30:19.971122  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:19.971389  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:19.971439  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:20.470412  525066 type.go:168] "Request Body" body=""
	I1212 00:30:20.470491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:20.470835  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:20.970464  525066 type.go:168] "Request Body" body=""
	I1212 00:30:20.970544  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:20.970890  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:21.144446  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:21.207915  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:21.207964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:21.207983  525066 retry.go:31] will retry after 20.295589284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:21.471340  525066 type.go:168] "Request Body" body=""
	I1212 00:30:21.471410  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:21.471696  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:21.970430  525066 type.go:168] "Request Body" body=""
	I1212 00:30:21.970500  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:21.970808  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:22.470556  525066 type.go:168] "Request Body" body=""
	I1212 00:30:22.470633  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:22.470953  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:22.471006  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:22.970442  525066 type.go:168] "Request Body" body=""
	I1212 00:30:22.970508  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:22.970782  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:23.470501  525066 type.go:168] "Request Body" body=""
	I1212 00:30:23.470601  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:23.470922  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:23.970478  525066 type.go:168] "Request Body" body=""
	I1212 00:30:23.970553  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:23.970885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:24.470551  525066 type.go:168] "Request Body" body=""
	I1212 00:30:24.470618  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:24.470920  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:24.971014  525066 type.go:168] "Request Body" body=""
	I1212 00:30:24.971090  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:24.971391  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:24.971444  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:25.471210  525066 type.go:168] "Request Body" body=""
	I1212 00:30:25.471284  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:25.471604  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:25.971349  525066 type.go:168] "Request Body" body=""
	I1212 00:30:25.971417  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:25.971673  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:26.470375  525066 type.go:168] "Request Body" body=""
	I1212 00:30:26.470450  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:26.470752  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:26.970468  525066 type.go:168] "Request Body" body=""
	I1212 00:30:26.970568  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:26.970900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:27.470551  525066 type.go:168] "Request Body" body=""
	I1212 00:30:27.470632  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:27.470951  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:27.471009  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:27.970461  525066 type.go:168] "Request Body" body=""
	I1212 00:30:27.970535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:27.970896  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:28.470447  525066 type.go:168] "Request Body" body=""
	I1212 00:30:28.470517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:28.470841  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:28.970537  525066 type.go:168] "Request Body" body=""
	I1212 00:30:28.970615  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:28.970891  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:29.470450  525066 type.go:168] "Request Body" body=""
	I1212 00:30:29.470530  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:29.470908  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:29.970898  525066 type.go:168] "Request Body" body=""
	I1212 00:30:29.970970  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:29.971305  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:29.971361  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:30.470857  525066 type.go:168] "Request Body" body=""
	I1212 00:30:30.470924  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:30.471192  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:30.971054  525066 type.go:168] "Request Body" body=""
	I1212 00:30:30.971147  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:30.971476  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:31.471280  525066 type.go:168] "Request Body" body=""
	I1212 00:30:31.471354  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:31.471652  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:31.970396  525066 type.go:168] "Request Body" body=""
	I1212 00:30:31.970469  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:31.970748  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:32.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:30:32.470524  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:32.470875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:32.470929  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:32.970467  525066 type.go:168] "Request Body" body=""
	I1212 00:30:32.970548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:32.970910  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:33.470548  525066 type.go:168] "Request Body" body=""
	I1212 00:30:33.470621  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:33.470958  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:33.970663  525066 type.go:168] "Request Body" body=""
	I1212 00:30:33.970768  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:33.971120  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:34.470641  525066 type.go:168] "Request Body" body=""
	I1212 00:30:34.470734  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:34.471055  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:34.471106  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:34.971029  525066 type.go:168] "Request Body" body=""
	I1212 00:30:34.971106  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:34.971362  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:35.471168  525066 type.go:168] "Request Body" body=""
	I1212 00:30:35.471237  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:35.471543  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:35.971213  525066 type.go:168] "Request Body" body=""
	I1212 00:30:35.971284  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:35.971613  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:36.471350  525066 type.go:168] "Request Body" body=""
	I1212 00:30:36.471428  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:36.471693  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:36.471739  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:36.970449  525066 type.go:168] "Request Body" body=""
	I1212 00:30:36.970521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:36.970836  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:37.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:30:37.470518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:37.470867  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:37.970336  525066 type.go:168] "Request Body" body=""
	I1212 00:30:37.970408  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:37.970717  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:38.470440  525066 type.go:168] "Request Body" body=""
	I1212 00:30:38.470510  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:38.470841  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:38.970461  525066 type.go:168] "Request Body" body=""
	I1212 00:30:38.970541  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:38.970920  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:38.970990  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:39.470649  525066 type.go:168] "Request Body" body=""
	I1212 00:30:39.470739  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:39.471073  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:39.970912  525066 type.go:168] "Request Body" body=""
	I1212 00:30:39.970992  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:39.971332  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:40.471276  525066 type.go:168] "Request Body" body=""
	I1212 00:30:40.471354  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:40.471676  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:40.970403  525066 type.go:168] "Request Body" body=""
	I1212 00:30:40.970481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:40.970820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:41.470514  525066 type.go:168] "Request Body" body=""
	I1212 00:30:41.470595  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:41.470937  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:41.471004  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:41.504392  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:41.561180  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:41.564784  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:41.564819  525066 retry.go:31] will retry after 29.925155821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:41.971369  525066 type.go:168] "Request Body" body=""
	I1212 00:30:41.971443  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:41.971817  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:42.470482  525066 type.go:168] "Request Body" body=""
	I1212 00:30:42.470569  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:42.470884  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:42.970674  525066 type.go:168] "Request Body" body=""
	I1212 00:30:42.970766  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:42.971092  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:43.470816  525066 type.go:168] "Request Body" body=""
	I1212 00:30:43.470887  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:43.471196  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:43.471261  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:43.970994  525066 type.go:168] "Request Body" body=""
	I1212 00:30:43.971095  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:43.971420  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:44.471076  525066 type.go:168] "Request Body" body=""
	I1212 00:30:44.471150  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:44.471470  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:44.971260  525066 type.go:168] "Request Body" body=""
	I1212 00:30:44.971332  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:44.971645  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:45.470349  525066 type.go:168] "Request Body" body=""
	I1212 00:30:45.470425  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:45.470820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:45.970433  525066 type.go:168] "Request Body" body=""
	I1212 00:30:45.970513  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:45.970829  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:45.970886  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:46.470441  525066 type.go:168] "Request Body" body=""
	I1212 00:30:46.470539  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:46.470878  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:46.970390  525066 type.go:168] "Request Body" body=""
	I1212 00:30:46.970456  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:46.970764  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:47.470436  525066 type.go:168] "Request Body" body=""
	I1212 00:30:47.470515  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:47.470849  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:47.970601  525066 type.go:168] "Request Body" body=""
	I1212 00:30:47.970697  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:47.970992  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:47.971047  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:48.470367  525066 type.go:168] "Request Body" body=""
	I1212 00:30:48.470433  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:48.470671  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:48.970405  525066 type.go:168] "Request Body" body=""
	I1212 00:30:48.970490  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:48.970853  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:49.470483  525066 type.go:168] "Request Body" body=""
	I1212 00:30:49.470560  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:49.470858  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:49.582168  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:49.635241  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:49.638539  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:49.638564  525066 retry.go:31] will retry after 36.706436998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:49.971245  525066 type.go:168] "Request Body" body=""
	I1212 00:30:49.971317  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:49.971579  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:49.971624  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:50.470498  525066 type.go:168] "Request Body" body=""
	I1212 00:30:50.470570  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:50.470924  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:50.970508  525066 type.go:168] "Request Body" body=""
	I1212 00:30:50.970583  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:50.970916  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:51.470404  525066 type.go:168] "Request Body" body=""
	I1212 00:30:51.470472  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:51.470752  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:51.970450  525066 type.go:168] "Request Body" body=""
	I1212 00:30:51.970531  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:51.970886  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:52.470591  525066 type.go:168] "Request Body" body=""
	I1212 00:30:52.470671  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:52.470990  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:52.471051  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:52.970388  525066 type.go:168] "Request Body" body=""
	I1212 00:30:52.970466  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:52.970738  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:53.470464  525066 type.go:168] "Request Body" body=""
	I1212 00:30:53.470534  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:53.470877  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:53.970467  525066 type.go:168] "Request Body" body=""
	I1212 00:30:53.970540  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:53.970875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:54.470561  525066 type.go:168] "Request Body" body=""
	I1212 00:30:54.470625  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:54.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:54.970766  525066 type.go:168] "Request Body" body=""
	I1212 00:30:54.970842  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:54.971159  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:54.971210  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:55.470744  525066 type.go:168] "Request Body" body=""
	I1212 00:30:55.470816  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:55.471136  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:55.970919  525066 type.go:168] "Request Body" body=""
	I1212 00:30:55.970989  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:55.971245  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:56.471021  525066 type.go:168] "Request Body" body=""
	I1212 00:30:56.471102  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:56.471459  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:56.971297  525066 type.go:168] "Request Body" body=""
	I1212 00:30:56.971380  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:56.971721  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:56.971777  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:57.470392  525066 type.go:168] "Request Body" body=""
	I1212 00:30:57.470466  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:57.470735  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:57.970426  525066 type.go:168] "Request Body" body=""
	I1212 00:30:57.970498  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:57.970846  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:58.470550  525066 type.go:168] "Request Body" body=""
	I1212 00:30:58.470627  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:58.470983  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:58.970661  525066 type.go:168] "Request Body" body=""
	I1212 00:30:58.970747  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:58.971040  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:59.470746  525066 type.go:168] "Request Body" body=""
	I1212 00:30:59.470824  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:59.471166  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:59.471220  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:59.970964  525066 type.go:168] "Request Body" body=""
	I1212 00:30:59.971041  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:59.971352  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:00.470404  525066 type.go:168] "Request Body" body=""
	I1212 00:31:00.470477  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:00.470773  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:00.970466  525066 type.go:168] "Request Body" body=""
	I1212 00:31:00.970543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:00.970928  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:01.470645  525066 type.go:168] "Request Body" body=""
	I1212 00:31:01.470749  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:01.471096  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:01.970787  525066 type.go:168] "Request Body" body=""
	I1212 00:31:01.970856  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:01.971135  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:01.971178  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:02.470982  525066 type.go:168] "Request Body" body=""
	I1212 00:31:02.471077  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:02.471408  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:02.971193  525066 type.go:168] "Request Body" body=""
	I1212 00:31:02.971269  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:02.971592  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:03.471340  525066 type.go:168] "Request Body" body=""
	I1212 00:31:03.471407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:03.471649  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:03.970329  525066 type.go:168] "Request Body" body=""
	I1212 00:31:03.970409  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:03.970755  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:04.470469  525066 type.go:168] "Request Body" body=""
	I1212 00:31:04.470553  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:04.470917  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:04.470977  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:04.971081  525066 type.go:168] "Request Body" body=""
	I1212 00:31:04.971152  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:04.971443  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:05.471286  525066 type.go:168] "Request Body" body=""
	I1212 00:31:05.471363  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:05.471677  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:05.970382  525066 type.go:168] "Request Body" body=""
	I1212 00:31:05.970464  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:05.970758  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:06.470420  525066 type.go:168] "Request Body" body=""
	I1212 00:31:06.470484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:06.470788  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:06.970555  525066 type.go:168] "Request Body" body=""
	I1212 00:31:06.970636  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:06.970974  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:06.971042  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:07.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:31:07.470530  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:07.470902  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:07.970448  525066 type.go:168] "Request Body" body=""
	I1212 00:31:07.970518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:07.970835  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:08.470450  525066 type.go:168] "Request Body" body=""
	I1212 00:31:08.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:08.470867  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:08.970558  525066 type.go:168] "Request Body" body=""
	I1212 00:31:08.970638  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:08.970976  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:09.470659  525066 type.go:168] "Request Body" body=""
	I1212 00:31:09.470748  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:09.471069  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:09.471163  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:09.971114  525066 type.go:168] "Request Body" body=""
	I1212 00:31:09.971187  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:09.971512  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:10.470533  525066 type.go:168] "Request Body" body=""
	I1212 00:31:10.470613  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:10.470969  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:10.970732  525066 type.go:168] "Request Body" body=""
	I1212 00:31:10.970807  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:10.971084  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:11.470443  525066 type.go:168] "Request Body" body=""
	I1212 00:31:11.470518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:11.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:11.491140  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:31:11.552135  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:11.552186  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:11.552275  525066 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
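
Enabling storage-provisioner fails the same way: before submitting the manifest, kubectl validates it against the cluster's OpenAPI schema, and the fetch of https://localhost:8441/openapi/v2 is refused while the apiserver is down (the error text itself names --validate=false as the escape hatch). A standalone probe of that endpoint, mirroring the request the validator makes, might look like the sketch below; the InsecureSkipVerify setting is an assumption for a quick reachability check only, not something the log shows kubectl doing.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint and 32s timeout that appear in the validation error above.
	client := &http.Client{
		Timeout: 32 * time.Second,
		// Assumption: skip certificate verification because this is only a
		// reachability probe, not a real API client.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		// While the apiserver is down this prints the same
		// "connect: connection refused" seen throughout the log.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi endpoint:", resp.Status)
}
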
	I1212 00:31:11.970638  525066 type.go:168] "Request Body" body=""
	I1212 00:31:11.970757  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:11.971089  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:11.971151  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:12.470532  525066 type.go:168] "Request Body" body=""
	I1212 00:31:12.470609  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:12.470899  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:12.970411  525066 type.go:168] "Request Body" body=""
	I1212 00:31:12.970504  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:12.970815  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:13.470503  525066 type.go:168] "Request Body" body=""
	I1212 00:31:13.470574  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:13.470924  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:13.970619  525066 type.go:168] "Request Body" body=""
	I1212 00:31:13.970706  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:13.970963  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:14.470738  525066 type.go:168] "Request Body" body=""
	I1212 00:31:14.470812  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:14.471133  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:14.471187  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:14.971146  525066 type.go:168] "Request Body" body=""
	I1212 00:31:14.971222  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:14.971541  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:15.471278  525066 type.go:168] "Request Body" body=""
	I1212 00:31:15.471365  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:15.471609  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:15.970340  525066 type.go:168] "Request Body" body=""
	I1212 00:31:15.970431  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:15.970804  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:16.470509  525066 type.go:168] "Request Body" body=""
	I1212 00:31:16.470591  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:16.470920  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:16.970406  525066 type.go:168] "Request Body" body=""
	I1212 00:31:16.970483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:16.970790  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:16.970848  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:17.470447  525066 type.go:168] "Request Body" body=""
	I1212 00:31:17.470519  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:17.470842  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:17.970550  525066 type.go:168] "Request Body" body=""
	I1212 00:31:17.970627  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:17.970937  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:18.470378  525066 type.go:168] "Request Body" body=""
	I1212 00:31:18.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:18.470742  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:18.970443  525066 type.go:168] "Request Body" body=""
	I1212 00:31:18.970523  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:18.970864  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:18.970923  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:19.470460  525066 type.go:168] "Request Body" body=""
	I1212 00:31:19.470532  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:19.470860  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:19.970827  525066 type.go:168] "Request Body" body=""
	I1212 00:31:19.970900  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:19.971156  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:20.471114  525066 type.go:168] "Request Body" body=""
	I1212 00:31:20.471192  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:20.471496  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:20.971299  525066 type.go:168] "Request Body" body=""
	I1212 00:31:20.971376  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:20.971723  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:20.971777  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:21.471362  525066 type.go:168] "Request Body" body=""
	I1212 00:31:21.471430  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:21.471729  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:21.970437  525066 type.go:168] "Request Body" body=""
	I1212 00:31:21.970510  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:21.970868  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:22.470577  525066 type.go:168] "Request Body" body=""
	I1212 00:31:22.470650  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:22.470985  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:22.970698  525066 type.go:168] "Request Body" body=""
	I1212 00:31:22.970765  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:22.971007  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:23.470436  525066 type.go:168] "Request Body" body=""
	I1212 00:31:23.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:23.470861  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:23.470914  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:23.970561  525066 type.go:168] "Request Body" body=""
	I1212 00:31:23.970643  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:23.970973  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:24.470353  525066 type.go:168] "Request Body" body=""
	I1212 00:31:24.470466  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:24.470739  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:24.970663  525066 type.go:168] "Request Body" body=""
	I1212 00:31:24.970762  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:24.971091  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:25.470468  525066 type.go:168] "Request Body" body=""
	I1212 00:31:25.470542  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:25.470865  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:25.970393  525066 type.go:168] "Request Body" body=""
	I1212 00:31:25.970464  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:25.970807  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:25.970864  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:26.345425  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:31:26.402811  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:26.406955  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:26.407059  525066 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 00:31:26.410095  525066 out.go:179] * Enabled addons: 
	I1212 00:31:26.413891  525066 addons.go:530] duration metric: took 1m46.38627975s for enable addons: enabled=[]
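
With addon setup abandoned (enabled=[]), the remaining traffic in this log is a single loop: every ~500ms, node_ready.go GETs /api/v1/nodes/functional-035643 and checks the node's Ready condition, and each request is refused until the apiserver comes back. A minimal client-go sketch of that readiness poll is below, assuming the kubeconfig path and node name from the log; this is an illustration of the pattern, not minikube's node_ready.go itself.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and reports whether its Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down this is the
		// "connect: connection refused" error logged above.
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := nodeReady(cs, "functional-035643")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~half-second cadence in the log
	}
}
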
	I1212 00:31:26.471160  525066 type.go:168] "Request Body" body=""
	I1212 00:31:26.471237  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:26.471562  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:26.971357  525066 type.go:168] "Request Body" body=""
	I1212 00:31:26.971432  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:26.971737  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:27.470432  525066 type.go:168] "Request Body" body=""
	I1212 00:31:27.470500  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:27.470799  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:27.970424  525066 type.go:168] "Request Body" body=""
	I1212 00:31:27.970502  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:27.970862  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:27.970917  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:28.470589  525066 type.go:168] "Request Body" body=""
	I1212 00:31:28.470667  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:28.470983  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:28.970654  525066 type.go:168] "Request Body" body=""
	I1212 00:31:28.970741  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:28.970990  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:29.470447  525066 type.go:168] "Request Body" body=""
	I1212 00:31:29.470518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:29.470827  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:29.970750  525066 type.go:168] "Request Body" body=""
	I1212 00:31:29.970827  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:29.971160  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:29.971218  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:30.471043  525066 type.go:168] "Request Body" body=""
	I1212 00:31:30.471109  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:30.471376  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:30.971171  525066 type.go:168] "Request Body" body=""
	I1212 00:31:30.971241  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:30.971550  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:31.471358  525066 type.go:168] "Request Body" body=""
	I1212 00:31:31.471445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:31.471839  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:31.970419  525066 type.go:168] "Request Body" body=""
	I1212 00:31:31.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:31.970752  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:32.470476  525066 type.go:168] "Request Body" body=""
	I1212 00:31:32.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:32.470896  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:32.470960  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:32.970646  525066 type.go:168] "Request Body" body=""
	I1212 00:31:32.970737  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:32.971068  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:33.470394  525066 type.go:168] "Request Body" body=""
	I1212 00:31:33.470464  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:33.470730  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:33.970441  525066 type.go:168] "Request Body" body=""
	I1212 00:31:33.970512  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:33.970887  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:34.470452  525066 type.go:168] "Request Body" body=""
	I1212 00:31:34.470528  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:34.471050  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:34.471101  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:34.971076  525066 type.go:168] "Request Body" body=""
	I1212 00:31:34.971150  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:34.971412  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:35.471345  525066 type.go:168] "Request Body" body=""
	I1212 00:31:35.471417  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:35.471701  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:35.970416  525066 type.go:168] "Request Body" body=""
	I1212 00:31:35.970491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:35.970794  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:36.470413  525066 type.go:168] "Request Body" body=""
	I1212 00:31:36.470483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:36.470801  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:36.970488  525066 type.go:168] "Request Body" body=""
	I1212 00:31:36.970578  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:36.970944  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:36.970998  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:37.470498  525066 type.go:168] "Request Body" body=""
	I1212 00:31:37.470572  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:37.470867  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:37.970414  525066 type.go:168] "Request Body" body=""
	I1212 00:31:37.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:37.970840  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:38.470560  525066 type.go:168] "Request Body" body=""
	I1212 00:31:38.470640  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:38.470997  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:38.970458  525066 type.go:168] "Request Body" body=""
	I1212 00:31:38.970542  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:38.970875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:39.470418  525066 type.go:168] "Request Body" body=""
	I1212 00:31:39.470483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:39.470746  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:39.470792  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:39.970766  525066 type.go:168] "Request Body" body=""
	I1212 00:31:39.970840  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:39.971186  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:40.470676  525066 type.go:168] "Request Body" body=""
	I1212 00:31:40.470773  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:40.471187  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:40.970410  525066 type.go:168] "Request Body" body=""
	I1212 00:31:40.970498  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:40.970933  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:41.470798  525066 type.go:168] "Request Body" body=""
	I1212 00:31:41.470881  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:41.471270  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:41.471324  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:41.971113  525066 type.go:168] "Request Body" body=""
	I1212 00:31:41.971189  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:41.971540  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:42.471293  525066 type.go:168] "Request Body" body=""
	I1212 00:31:42.471364  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:42.471623  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:42.971377  525066 type.go:168] "Request Body" body=""
	I1212 00:31:42.971447  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:42.971777  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:43.470484  525066 type.go:168] "Request Body" body=""
	I1212 00:31:43.470559  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:43.470898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:43.970565  525066 type.go:168] "Request Body" body=""
	I1212 00:31:43.970635  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:43.970946  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:43.970995  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:44.470714  525066 type.go:168] "Request Body" body=""
	I1212 00:31:44.470786  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:44.471067  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:44.970961  525066 type.go:168] "Request Body" body=""
	I1212 00:31:44.971037  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:44.971349  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:45.471086  525066 type.go:168] "Request Body" body=""
	I1212 00:31:45.471160  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:45.471425  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:45.971255  525066 type.go:168] "Request Body" body=""
	I1212 00:31:45.971369  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:45.971732  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:45.971788  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:46.470449  525066 type.go:168] "Request Body" body=""
	I1212 00:31:46.470522  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:46.470852  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:46.970385  525066 type.go:168] "Request Body" body=""
	I1212 00:31:46.970452  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:46.970722  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:47.470437  525066 type.go:168] "Request Body" body=""
	I1212 00:31:47.470516  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:47.471054  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:47.970787  525066 type.go:168] "Request Body" body=""
	I1212 00:31:47.970859  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:47.971237  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:48.470392  525066 type.go:168] "Request Body" body=""
	I1212 00:31:48.470462  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:48.470738  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:48.470782  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:48.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:31:48.970518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:48.970858  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:49.470546  525066 type.go:168] "Request Body" body=""
	I1212 00:31:49.470624  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:49.470988  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:49.970873  525066 type.go:168] "Request Body" body=""
	I1212 00:31:49.970945  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:49.971253  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:50.471037  525066 type.go:168] "Request Body" body=""
	I1212 00:31:50.471108  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:50.471396  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:50.471445  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:50.971208  525066 type.go:168] "Request Body" body=""
	I1212 00:31:50.971282  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:50.971603  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:51.471213  525066 type.go:168] "Request Body" body=""
	I1212 00:31:51.471279  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:51.471540  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:51.971366  525066 type.go:168] "Request Body" body=""
	I1212 00:31:51.971439  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:51.971745  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:52.470473  525066 type.go:168] "Request Body" body=""
	I1212 00:31:52.470565  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:52.470989  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:52.970405  525066 type.go:168] "Request Body" body=""
	I1212 00:31:52.970478  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:52.970761  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:52.970811  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:53.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:31:53.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:53.470872  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:53.970497  525066 type.go:168] "Request Body" body=""
	I1212 00:31:53.970576  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:53.970925  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:54.470437  525066 type.go:168] "Request Body" body=""
	I1212 00:31:54.470505  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:54.470810  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:54.970825  525066 type.go:168] "Request Body" body=""
	I1212 00:31:54.970901  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:54.971247  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:54.971305  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:55.471027  525066 type.go:168] "Request Body" body=""
	I1212 00:31:55.471109  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:55.471438  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:55.971085  525066 type.go:168] "Request Body" body=""
	I1212 00:31:55.971149  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:55.971395  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:56.471224  525066 type.go:168] "Request Body" body=""
	I1212 00:31:56.471307  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:56.471633  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:56.970393  525066 type.go:168] "Request Body" body=""
	I1212 00:31:56.970474  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:56.970791  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:57.470420  525066 type.go:168] "Request Body" body=""
	I1212 00:31:57.470487  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:57.470757  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:57.470801  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:57.970446  525066 type.go:168] "Request Body" body=""
	I1212 00:31:57.970526  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:57.970833  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:58.470459  525066 type.go:168] "Request Body" body=""
	I1212 00:31:58.470532  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:58.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:58.970572  525066 type.go:168] "Request Body" body=""
	I1212 00:31:58.970646  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:58.970920  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:59.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:31:59.470548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:59.470889  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:59.470948  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:59.970930  525066 type.go:168] "Request Body" body=""
	I1212 00:31:59.971026  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:59.971389  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:00.470941  525066 type.go:168] "Request Body" body=""
	I1212 00:32:00.471069  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:00.471359  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:00.971229  525066 type.go:168] "Request Body" body=""
	I1212 00:32:00.971307  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:00.971647  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:01.470368  525066 type.go:168] "Request Body" body=""
	I1212 00:32:01.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:01.470795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:01.970448  525066 type.go:168] "Request Body" body=""
	I1212 00:32:01.970521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:01.970816  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:01.970862  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:02.470549  525066 type.go:168] "Request Body" body=""
	I1212 00:32:02.470624  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:02.470998  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:02.970773  525066 type.go:168] "Request Body" body=""
	I1212 00:32:02.970858  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:02.971312  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:03.471100  525066 type.go:168] "Request Body" body=""
	I1212 00:32:03.471172  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:03.471473  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:03.971278  525066 type.go:168] "Request Body" body=""
	I1212 00:32:03.971356  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:03.971686  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:03.971737  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:04.470407  525066 type.go:168] "Request Body" body=""
	I1212 00:32:04.470491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:04.470843  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:04.970701  525066 type.go:168] "Request Body" body=""
	I1212 00:32:04.970771  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:04.971049  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:05.470747  525066 type.go:168] "Request Body" body=""
	I1212 00:32:05.470824  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:05.471189  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:05.970759  525066 type.go:168] "Request Body" body=""
	I1212 00:32:05.970838  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:05.971177  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:06.470915  525066 type.go:168] "Request Body" body=""
	I1212 00:32:06.470997  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:06.471253  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:06.471294  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:06.971057  525066 type.go:168] "Request Body" body=""
	I1212 00:32:06.971134  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:06.971488  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:07.471269  525066 type.go:168] "Request Body" body=""
	I1212 00:32:07.471344  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:07.471670  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:07.970352  525066 type.go:168] "Request Body" body=""
	I1212 00:32:07.970421  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:07.970747  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:08.470438  525066 type.go:168] "Request Body" body=""
	I1212 00:32:08.470509  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:08.470878  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:08.970449  525066 type.go:168] "Request Body" body=""
	I1212 00:32:08.970521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:08.970871  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:08.970925  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:09.470391  525066 type.go:168] "Request Body" body=""
	I1212 00:32:09.470470  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:09.470756  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:09.970703  525066 type.go:168] "Request Body" body=""
	I1212 00:32:09.970779  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:09.971116  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:10.471033  525066 type.go:168] "Request Body" body=""
	I1212 00:32:10.471109  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:10.471417  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:10.971169  525066 type.go:168] "Request Body" body=""
	I1212 00:32:10.971238  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:10.971496  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:10.971539  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:11.471372  525066 type.go:168] "Request Body" body=""
	I1212 00:32:11.471451  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:11.471770  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:11.970469  525066 type.go:168] "Request Body" body=""
	I1212 00:32:11.970586  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:11.970898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:12.470383  525066 type.go:168] "Request Body" body=""
	I1212 00:32:12.470453  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:12.470788  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:12.970473  525066 type.go:168] "Request Body" body=""
	I1212 00:32:12.970545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:12.970889  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:13.470464  525066 type.go:168] "Request Body" body=""
	I1212 00:32:13.470555  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:13.470934  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:13.470994  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:13.970655  525066 type.go:168] "Request Body" body=""
	I1212 00:32:13.970754  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:13.971092  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:14.470457  525066 type.go:168] "Request Body" body=""
	I1212 00:32:14.470538  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:14.470903  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:14.970794  525066 type.go:168] "Request Body" body=""
	I1212 00:32:14.970878  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:14.971205  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:15.470971  525066 type.go:168] "Request Body" body=""
	I1212 00:32:15.471055  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:15.471372  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:15.471414  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:15.971237  525066 type.go:168] "Request Body" body=""
	I1212 00:32:15.971320  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:15.971640  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:16.470370  525066 type.go:168] "Request Body" body=""
	I1212 00:32:16.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:16.470782  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:16.970410  525066 type.go:168] "Request Body" body=""
	I1212 00:32:16.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:16.970823  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:17.470439  525066 type.go:168] "Request Body" body=""
	I1212 00:32:17.470517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:17.470864  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:17.970590  525066 type.go:168] "Request Body" body=""
	I1212 00:32:17.970664  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:17.971024  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:17.971078  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:18.470738  525066 type.go:168] "Request Body" body=""
	I1212 00:32:18.470805  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:18.471184  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:18.971020  525066 type.go:168] "Request Body" body=""
	I1212 00:32:18.971105  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:18.971458  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:19.471116  525066 type.go:168] "Request Body" body=""
	I1212 00:32:19.471188  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:19.471515  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:19.970337  525066 type.go:168] "Request Body" body=""
	I1212 00:32:19.970412  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:19.970828  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:20.471213  525066 type.go:168] "Request Body" body=""
	I1212 00:32:20.471293  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:20.471629  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:20.471692  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:20.970398  525066 type.go:168] "Request Body" body=""
	I1212 00:32:20.970472  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:20.970846  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:21.470529  525066 type.go:168] "Request Body" body=""
	I1212 00:32:21.470596  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:21.470869  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:21.970458  525066 type.go:168] "Request Body" body=""
	I1212 00:32:21.970531  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:21.970901  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:22.470489  525066 type.go:168] "Request Body" body=""
	I1212 00:32:22.470578  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:22.470923  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:22.970628  525066 type.go:168] "Request Body" body=""
	I1212 00:32:22.970719  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:22.970989  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:22.971032  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:23.470448  525066 type.go:168] "Request Body" body=""
	I1212 00:32:23.470535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:23.470894  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:23.970458  525066 type.go:168] "Request Body" body=""
	I1212 00:32:23.970535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:23.970896  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:24.470332  525066 type.go:168] "Request Body" body=""
	I1212 00:32:24.470407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:24.470716  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:24.970747  525066 type.go:168] "Request Body" body=""
	I1212 00:32:24.970820  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:24.971165  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:24.971223  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:25.471031  525066 type.go:168] "Request Body" body=""
	I1212 00:32:25.471128  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:25.471490  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:25.971208  525066 type.go:168] "Request Body" body=""
	I1212 00:32:25.971275  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:25.971541  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:26.471290  525066 type.go:168] "Request Body" body=""
	I1212 00:32:26.471368  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:26.471700  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:26.970438  525066 type.go:168] "Request Body" body=""
	I1212 00:32:26.970517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:26.970866  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:27.470424  525066 type.go:168] "Request Body" body=""
	I1212 00:32:27.470499  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:27.470866  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:27.470914  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:27.970455  525066 type.go:168] "Request Body" body=""
	I1212 00:32:27.970540  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:27.970913  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:28.470649  525066 type.go:168] "Request Body" body=""
	I1212 00:32:28.470738  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:28.471036  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:28.970429  525066 type.go:168] "Request Body" body=""
	I1212 00:32:28.970500  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:28.970820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:29.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:32:29.470523  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:29.470857  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:29.970808  525066 type.go:168] "Request Body" body=""
	I1212 00:32:29.970884  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:29.971262  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:29.971318  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:30.471202  525066 type.go:168] "Request Body" body=""
	I1212 00:32:30.471270  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:30.471570  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:30.971249  525066 type.go:168] "Request Body" body=""
	I1212 00:32:30.971333  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:30.971675  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:31.470379  525066 type.go:168] "Request Body" body=""
	I1212 00:32:31.470456  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:31.470795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:31.970429  525066 type.go:168] "Request Body" body=""
	I1212 00:32:31.970500  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:31.970830  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:32.470463  525066 type.go:168] "Request Body" body=""
	I1212 00:32:32.470546  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:32.470916  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:32.470969  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:32.970507  525066 type.go:168] "Request Body" body=""
	I1212 00:32:32.970586  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:32.970962  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:33.470645  525066 type.go:168] "Request Body" body=""
	I1212 00:32:33.470725  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:33.471027  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:33.970436  525066 type.go:168] "Request Body" body=""
	I1212 00:32:33.970515  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:33.970871  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:34.470554  525066 type.go:168] "Request Body" body=""
	I1212 00:32:34.470623  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:34.470962  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:34.471031  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 further identical cycles elided: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-035643 request is retried every ~500ms from 00:32:34.970 through 00:33:34.975, each answered with "dial tcp 192.168.49.2:8441: connect: connection refused", while node_ready.go:55 logs a "will retry" warning roughly every 2 seconds ...]
	I1212 00:33:35.470393  525066 type.go:168] "Request Body" body=""
	I1212 00:33:35.470475  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:35.470774  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:35.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:33:35.970520  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:35.970921  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:36.470630  525066 type.go:168] "Request Body" body=""
	I1212 00:33:36.470714  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:36.470977  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:36.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:33:36.970519  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:36.970841  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:37.470461  525066 type.go:168] "Request Body" body=""
	I1212 00:33:37.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:37.470921  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:37.470982  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:37.970408  525066 type.go:168] "Request Body" body=""
	I1212 00:33:37.970475  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:37.970748  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:38.470525  525066 type.go:168] "Request Body" body=""
	I1212 00:33:38.470598  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:38.470966  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:38.970662  525066 type.go:168] "Request Body" body=""
	I1212 00:33:38.970757  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:38.971088  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:39.470794  525066 type.go:168] "Request Body" body=""
	I1212 00:33:39.470866  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:39.471135  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:39.471185  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:39.971164  525066 type.go:168] "Request Body" body=""
	I1212 00:33:39.971246  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:39.971584  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:40.470516  525066 type.go:168] "Request Body" body=""
	I1212 00:33:40.470591  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:40.470924  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:40.970415  525066 type.go:168] "Request Body" body=""
	I1212 00:33:40.970485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:40.971060  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:41.470451  525066 type.go:168] "Request Body" body=""
	I1212 00:33:41.470587  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:41.470946  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:41.970903  525066 type.go:168] "Request Body" body=""
	I1212 00:33:41.970980  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:41.971291  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:41.971344  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:42.471094  525066 type.go:168] "Request Body" body=""
	I1212 00:33:42.471178  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:42.471572  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:42.971249  525066 type.go:168] "Request Body" body=""
	I1212 00:33:42.971320  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:42.971651  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:43.470400  525066 type.go:168] "Request Body" body=""
	I1212 00:33:43.470476  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:43.470790  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:43.970434  525066 type.go:168] "Request Body" body=""
	I1212 00:33:43.970503  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:43.970849  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:44.470442  525066 type.go:168] "Request Body" body=""
	I1212 00:33:44.470512  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:44.470828  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:44.470882  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:44.970788  525066 type.go:168] "Request Body" body=""
	I1212 00:33:44.970861  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:44.971179  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:45.471027  525066 type.go:168] "Request Body" body=""
	I1212 00:33:45.471095  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:45.471384  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:45.971175  525066 type.go:168] "Request Body" body=""
	I1212 00:33:45.971254  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:45.971602  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:46.471390  525066 type.go:168] "Request Body" body=""
	I1212 00:33:46.471469  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:46.471785  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:46.471843  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:46.970475  525066 type.go:168] "Request Body" body=""
	I1212 00:33:46.970546  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:46.970906  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:47.470431  525066 type.go:168] "Request Body" body=""
	I1212 00:33:47.470507  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:47.470868  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:47.970663  525066 type.go:168] "Request Body" body=""
	I1212 00:33:47.970759  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:47.971097  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:48.470801  525066 type.go:168] "Request Body" body=""
	I1212 00:33:48.470875  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:48.471136  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:48.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:33:48.970518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:48.970883  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:48.970943  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:49.470484  525066 type.go:168] "Request Body" body=""
	I1212 00:33:49.470558  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:49.470901  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:49.970711  525066 type.go:168] "Request Body" body=""
	I1212 00:33:49.970783  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:49.971100  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:50.471121  525066 type.go:168] "Request Body" body=""
	I1212 00:33:50.471191  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:50.471492  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:50.971285  525066 type.go:168] "Request Body" body=""
	I1212 00:33:50.971368  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:50.971704  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:50.971758  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:51.470388  525066 type.go:168] "Request Body" body=""
	I1212 00:33:51.470461  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:51.470790  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:51.970459  525066 type.go:168] "Request Body" body=""
	I1212 00:33:51.970536  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:51.970878  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:52.470599  525066 type.go:168] "Request Body" body=""
	I1212 00:33:52.470668  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:52.471040  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:52.970519  525066 type.go:168] "Request Body" body=""
	I1212 00:33:52.970595  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:52.970943  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:53.470479  525066 type.go:168] "Request Body" body=""
	I1212 00:33:53.470557  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:53.470898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:53.470998  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:53.970658  525066 type.go:168] "Request Body" body=""
	I1212 00:33:53.970752  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:53.971087  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:54.470382  525066 type.go:168] "Request Body" body=""
	I1212 00:33:54.470475  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:54.470745  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:54.971392  525066 type.go:168] "Request Body" body=""
	I1212 00:33:54.971460  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:54.971785  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:55.470484  525066 type.go:168] "Request Body" body=""
	I1212 00:33:55.470559  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:55.470901  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:55.970438  525066 type.go:168] "Request Body" body=""
	I1212 00:33:55.970506  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:55.970773  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:55.970819  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:56.470532  525066 type.go:168] "Request Body" body=""
	I1212 00:33:56.470620  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:56.470993  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:56.970746  525066 type.go:168] "Request Body" body=""
	I1212 00:33:56.970823  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:56.971164  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:57.470798  525066 type.go:168] "Request Body" body=""
	I1212 00:33:57.470887  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:57.471201  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:57.970983  525066 type.go:168] "Request Body" body=""
	I1212 00:33:57.971057  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:57.971379  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:57.971435  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:58.471288  525066 type.go:168] "Request Body" body=""
	I1212 00:33:58.471374  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:58.471710  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:58.970417  525066 type.go:168] "Request Body" body=""
	I1212 00:33:58.970485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:58.970786  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:59.470436  525066 type.go:168] "Request Body" body=""
	I1212 00:33:59.470537  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:59.470835  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:59.970792  525066 type.go:168] "Request Body" body=""
	I1212 00:33:59.970864  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:59.971190  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:00.471100  525066 type.go:168] "Request Body" body=""
	I1212 00:34:00.471178  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:00.471446  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:00.471491  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:00.971317  525066 type.go:168] "Request Body" body=""
	I1212 00:34:00.971392  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:00.971704  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:01.470405  525066 type.go:168] "Request Body" body=""
	I1212 00:34:01.470478  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:01.470824  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:01.970499  525066 type.go:168] "Request Body" body=""
	I1212 00:34:01.970587  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:01.970878  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:02.470450  525066 type.go:168] "Request Body" body=""
	I1212 00:34:02.470548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:02.470875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:02.970486  525066 type.go:168] "Request Body" body=""
	I1212 00:34:02.970558  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:02.970903  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:02.970960  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:03.470466  525066 type.go:168] "Request Body" body=""
	I1212 00:34:03.470543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:03.470886  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:03.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:34:03.970517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:03.970835  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:04.470526  525066 type.go:168] "Request Body" body=""
	I1212 00:34:04.470601  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:04.470936  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:04.970883  525066 type.go:168] "Request Body" body=""
	I1212 00:34:04.970968  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:04.971228  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:04.971278  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:05.471031  525066 type.go:168] "Request Body" body=""
	I1212 00:34:05.471105  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:05.471416  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:05.971158  525066 type.go:168] "Request Body" body=""
	I1212 00:34:05.971232  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:05.971554  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:06.471186  525066 type.go:168] "Request Body" body=""
	I1212 00:34:06.471254  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:06.471579  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:06.971380  525066 type.go:168] "Request Body" body=""
	I1212 00:34:06.971454  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:06.971795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:06.971845  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:07.470453  525066 type.go:168] "Request Body" body=""
	I1212 00:34:07.470532  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:07.470891  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:07.970570  525066 type.go:168] "Request Body" body=""
	I1212 00:34:07.970640  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:07.970974  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:08.470454  525066 type.go:168] "Request Body" body=""
	I1212 00:34:08.470526  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:08.470855  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:08.970451  525066 type.go:168] "Request Body" body=""
	I1212 00:34:08.970523  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:08.970873  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:09.470384  525066 type.go:168] "Request Body" body=""
	I1212 00:34:09.470463  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:09.470759  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:09.470810  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:09.970476  525066 type.go:168] "Request Body" body=""
	I1212 00:34:09.970548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:09.970914  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:10.470339  525066 type.go:168] "Request Body" body=""
	I1212 00:34:10.470412  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:10.470749  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:10.970418  525066 type.go:168] "Request Body" body=""
	I1212 00:34:10.970489  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:10.970837  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:11.470433  525066 type.go:168] "Request Body" body=""
	I1212 00:34:11.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:11.470863  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:11.470914  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:11.970593  525066 type.go:168] "Request Body" body=""
	I1212 00:34:11.970672  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:11.971074  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:12.470513  525066 type.go:168] "Request Body" body=""
	I1212 00:34:12.470580  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:12.470938  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:12.970456  525066 type.go:168] "Request Body" body=""
	I1212 00:34:12.970537  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:12.970882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:13.470574  525066 type.go:168] "Request Body" body=""
	I1212 00:34:13.470650  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:13.471032  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:13.471090  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:13.970419  525066 type.go:168] "Request Body" body=""
	I1212 00:34:13.970498  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:13.970791  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:14.470474  525066 type.go:168] "Request Body" body=""
	I1212 00:34:14.470543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:14.470845  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:14.970750  525066 type.go:168] "Request Body" body=""
	I1212 00:34:14.970824  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:14.971143  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:15.470788  525066 type.go:168] "Request Body" body=""
	I1212 00:34:15.470856  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:15.471125  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:15.471166  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:15.970468  525066 type.go:168] "Request Body" body=""
	I1212 00:34:15.970542  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:15.970859  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:16.470484  525066 type.go:168] "Request Body" body=""
	I1212 00:34:16.470555  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:16.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:16.970423  525066 type.go:168] "Request Body" body=""
	I1212 00:34:16.970496  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:16.970812  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:17.470487  525066 type.go:168] "Request Body" body=""
	I1212 00:34:17.470558  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:17.470975  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:17.970713  525066 type.go:168] "Request Body" body=""
	I1212 00:34:17.970796  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:17.971146  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:17.971201  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:18.470794  525066 type.go:168] "Request Body" body=""
	I1212 00:34:18.470865  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:18.471131  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:18.970459  525066 type.go:168] "Request Body" body=""
	I1212 00:34:18.970539  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:18.970892  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:19.470617  525066 type.go:168] "Request Body" body=""
	I1212 00:34:19.470700  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:19.471001  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:19.970879  525066 type.go:168] "Request Body" body=""
	I1212 00:34:19.970960  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:19.971231  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:19.971282  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:20.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:34:20.470523  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:20.470854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:20.970561  525066 type.go:168] "Request Body" body=""
	I1212 00:34:20.970634  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:20.970964  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:21.470548  525066 type.go:168] "Request Body" body=""
	I1212 00:34:21.470625  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:21.470970  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:21.970472  525066 type.go:168] "Request Body" body=""
	I1212 00:34:21.970545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:21.970883  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:22.470450  525066 type.go:168] "Request Body" body=""
	I1212 00:34:22.470526  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:22.470863  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:22.470920  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:22.970409  525066 type.go:168] "Request Body" body=""
	I1212 00:34:22.970483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:22.970826  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:23.470427  525066 type.go:168] "Request Body" body=""
	I1212 00:34:23.470503  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:23.470827  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:23.970523  525066 type.go:168] "Request Body" body=""
	I1212 00:34:23.970600  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:23.970934  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:24.470590  525066 type.go:168] "Request Body" body=""
	I1212 00:34:24.470656  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:24.470938  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:24.470979  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:24.971061  525066 type.go:168] "Request Body" body=""
	I1212 00:34:24.971133  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:24.971465  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... ~120 further identical GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-035643, one every ~500ms from 00:34:25.471 to 00:35:26.971, elided: every request carried the same Accept/User-Agent headers, every response was empty (status="" headers="" milliseconds=0), and node_ready.go:55 repeated the same retry warning roughly every fifth attempt ...]
	W1212 00:34:26.971097  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:27.470473  525066 type.go:168] "Request Body" body=""
	I1212 00:35:27.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:27.470900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:27.970608  525066 type.go:168] "Request Body" body=""
	I1212 00:35:27.970694  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:27.971049  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:28.470775  525066 type.go:168] "Request Body" body=""
	I1212 00:35:28.470856  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:28.471187  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:28.970958  525066 type.go:168] "Request Body" body=""
	I1212 00:35:28.971022  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:28.971277  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:28.971316  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:29.471162  525066 type.go:168] "Request Body" body=""
	I1212 00:35:29.471240  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:29.471593  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:29.970376  525066 type.go:168] "Request Body" body=""
	I1212 00:35:29.970454  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:29.970816  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:30.471109  525066 type.go:168] "Request Body" body=""
	I1212 00:35:30.471183  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:30.471480  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:30.971287  525066 type.go:168] "Request Body" body=""
	I1212 00:35:30.971360  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:30.971672  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:30.971729  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:31.470405  525066 type.go:168] "Request Body" body=""
	I1212 00:35:31.470485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:31.470830  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:31.970549  525066 type.go:168] "Request Body" body=""
	I1212 00:35:31.970619  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:31.970957  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:32.470649  525066 type.go:168] "Request Body" body=""
	I1212 00:35:32.470745  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:32.471093  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:32.970460  525066 type.go:168] "Request Body" body=""
	I1212 00:35:32.970533  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:32.970861  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:33.470443  525066 type.go:168] "Request Body" body=""
	I1212 00:35:33.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:33.470783  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:33.470825  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:33.970454  525066 type.go:168] "Request Body" body=""
	I1212 00:35:33.970535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:33.970883  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:34.470595  525066 type.go:168] "Request Body" body=""
	I1212 00:35:34.470673  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:34.471021  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:34.970778  525066 type.go:168] "Request Body" body=""
	I1212 00:35:34.970845  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:34.971108  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:35.470789  525066 type.go:168] "Request Body" body=""
	I1212 00:35:35.470893  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:35.471408  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:35.471455  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:35.971178  525066 type.go:168] "Request Body" body=""
	I1212 00:35:35.971249  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:35.971545  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:36.471287  525066 type.go:168] "Request Body" body=""
	I1212 00:35:36.471358  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:36.471623  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:36.970386  525066 type.go:168] "Request Body" body=""
	I1212 00:35:36.970468  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:36.970815  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:37.470527  525066 type.go:168] "Request Body" body=""
	I1212 00:35:37.470612  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:37.470950  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:37.970440  525066 type.go:168] "Request Body" body=""
	I1212 00:35:37.970510  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:37.970824  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:37.970880  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:38.470422  525066 type.go:168] "Request Body" body=""
	I1212 00:35:38.470503  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:38.470828  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:38.970459  525066 type.go:168] "Request Body" body=""
	I1212 00:35:38.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:38.970885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:39.470567  525066 type.go:168] "Request Body" body=""
	I1212 00:35:39.470634  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:39.470915  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:39.971016  525066 type.go:168] "Request Body" body=""
	I1212 00:35:39.971090  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:39.971449  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:39.971507  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:40.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:35:40.470535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:40.470907  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:40.970388  525066 type.go:168] "Request Body" body=""
	I1212 00:35:40.970449  525066 node_ready.go:38] duration metric: took 6m0.000230679s for node "functional-035643" to be "Ready" ...
	I1212 00:35:40.973928  525066 out.go:203] 
	W1212 00:35:40.976747  525066 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 00:35:40.976773  525066 out.go:285] * 
	W1212 00:35:40.981440  525066 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:35:40.984739  525066 out.go:203] 
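	
	The retry entries above show minikube polling node "functional-035643" for its "Ready" condition every ~500ms until a 6m deadline; the timeout is what surfaces as "WaitNodeCondition: context deadline exceeded". A minimal client-go sketch of that poll-with-deadline pattern (an illustration only, not minikube's actual node_ready.go; waitNodeReady is a hypothetical helper, with the node name, interval, and timeout taken from the log):
	
	package nodewait
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitNodeReady polls the node's Ready condition every 500ms until the
	// deadline expires; the timeout path is what a caller would report as
	// "waiting for node to be ready: ... context deadline exceeded".
	func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
			case <-ticker.C:
				node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					continue // e.g. "dial tcp ... connection refused" while the apiserver is down
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
	}
	
	With timeout set to 6*time.Minute, every Get above fails with "connection refused", so the loop never observes Ready and exits on the deadline, matching the 6m0.000230679s duration metric in the log.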
	
	
	==> CRI-O <==
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.034109304Z" level=info msg="Using the internal default seccomp profile"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.034177635Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.03422949Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.034284676Z" level=info msg="RDT not available in the host system"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.034352408Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.035250323Z" level=info msg="Conmon does support the --sync option"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.035364265Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.035430742Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.036202564Z" level=info msg="Conmon does support the --sync option"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.036293393Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.03648849Z" level=info msg="Updated default CNI network name to "
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.037130698Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.037740481Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.037879998Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.09225968Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.09231892Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092389244Z" level=info msg="Create NRI interface"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092551029Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092568366Z" level=info msg="runtime interface created"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092583759Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092594753Z" level=info msg="runtime interface starting up..."
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092601407Z" level=info msg="starting plugins..."
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092616291Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:29:38 functional-035643 crio[5335]: time="2025-12-12T00:29:38.092695756Z" level=info msg="No systemd watchdog enabled"
	Dec 12 00:29:38 functional-035643 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:35:45.774540    8701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:45.775038    8701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:45.776707    8701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:45.777114    8701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:45.778596    8701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 00:17] overlayfs: idmapped layers are currently not supported
	[Dec12 00:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:35:45 up  3:18,  0 user,  load average: 0.28, 0.31, 0.79
	Linux functional-035643 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:35:43 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:43 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 12 00:35:43 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:43 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:44 functional-035643 kubelet[8580]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:44 functional-035643 kubelet[8580]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:44 functional-035643 kubelet[8580]: E1212 00:35:44.043809    8580 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:44 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:44 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:44 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1143.
	Dec 12 00:35:44 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:44 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:44 functional-035643 kubelet[8615]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:44 functional-035643 kubelet[8615]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:44 functional-035643 kubelet[8615]: E1212 00:35:44.789224    8615 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:44 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:44 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:45 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1144.
	Dec 12 00:35:45 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:45 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:45 functional-035643 kubelet[8633]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:45 functional-035643 kubelet[8633]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:45 functional-035643 kubelet[8633]: E1212 00:35:45.534200    8633 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:45 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:45 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
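
The kubelet section of the log above is the root cause of this failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), systemd restarts it in a loop (counter 1142-1144), and the apiserver on port 8441 therefore never comes back. A minimal sketch of the standard host-side probe (hostUsesCgroupV2 is a hypothetical helper, not minikube code): on a unified cgroup v2 hierarchy the file /sys/fs/cgroup/cgroup.controllers exists, while on a v1 host like this Ubuntu 20.04 machine it does not.

	package diag
	
	import "os"
	
	// hostUsesCgroupV2 reports whether the unified cgroup v2 hierarchy is
	// mounted: /sys/fs/cgroup/cgroup.controllers only exists on v2. Its
	// absence is the condition the kubelet above rejects.
	func hostUsesCgroupV2() bool {
		_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
		return err == nil
	}
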
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 2 (403.928076ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.53s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 kubectl -- --context functional-035643 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 kubectl -- --context functional-035643 get pods: exit status 1 (101.165383ms)

                                                
                                                
** stderr ** 
	E1212 00:35:53.928876  530332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:35:53.929202  530332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:35:53.930597  530332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:35:53.930951  530332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:35:53.932376  530332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-035643 kubectl -- --context functional-035643 get pods": exit status 1
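
Each kubectl attempt above fails identically: the TCP dial to 192.168.49.2:8441 is refused, meaning the host is reachable but nothing is listening on the apiserver port (a timeout would instead suggest a network or firewall problem). A small sketch of that distinction (probeAPIServer is a hypothetical helper, not part of the test suite):

	package diag
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// probeAPIServer distinguishes "connection refused" (the host answers
	// but nothing is listening, as in the errors above) from a dial
	// timeout (network or firewall problem).
	func probeAPIServer(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return fmt.Errorf("apiserver not reachable at %s: %w", addr, err)
		}
		return conn.Close()
	}

For this report's endpoint, probeAPIServer("192.168.49.2:8441") would return a wrapped "connection refused" error for as long as the kubelet restart loop keeps the apiserver down.
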
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-035643
helpers_test.go:244: (dbg) docker inspect functional-035643:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	        "Created": "2025-12-12T00:21:16.539894649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:21:16.600605162Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hostname",
	        "HostsPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hosts",
	        "LogPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a-json.log",
	        "Name": "/functional-035643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-035643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	                "LowerDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-035643",
	                "Source": "/var/lib/docker/volumes/functional-035643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035643",
	                "name.minikube.sigs.k8s.io": "functional-035643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ede6a17442d6bf83b8f4c9f93f252345cec3d0406f82de2d6bd2cfd4713e2163",
	            "SandboxKey": "/var/run/docker/netns/ede6a17442d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d5:12:89:ea:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad01995b183fdebead6c725e2b942ae8ce2d3964b3552789fe5b50ee7e7239a3",
	                    "EndpointID": "d429a1cd0f840d042af4ad7ea0bda6067a342be7fb552083411004a3604b0124",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035643",
	                        "02b8c8e636a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
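
The inspect output confirms the container itself is still running and that the apiserver port 8441/tcp is published to 127.0.0.1:33186 on the host; the failure is inside the guest, not in Docker's port plumbing. For illustration, a sketch that extracts such a binding from docker inspect JSON (hostPortFor is a hypothetical helper, not part of the suite):

	package diag
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// hostPortFor runs docker inspect and returns the host port bound to a
	// container port; hostPortFor("functional-035643", "8441/tcp") would
	// return "33186" for the output above.
	func hostPortFor(container, port string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var info []struct {
			NetworkSettings struct {
				Ports map[string][]struct{ HostIp, HostPort string }
			}
		}
		if err := json.Unmarshal(out, &info); err != nil {
			return "", err
		}
		if len(info) == 0 || len(info[0].NetworkSettings.Ports[port]) == 0 {
			return "", fmt.Errorf("no host binding for %s on %s", port, container)
		}
		return info[0].NetworkSettings.Ports[port][0].HostPort, nil
	}
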
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643: exit status 2 (307.627259ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-035643 logs -n 25: (1.064504994s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-921447 image ls --format yaml --alsologtostderr                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image ls --format short --alsologtostderr                                                                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image ls --format json --alsologtostderr                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-921447 ssh pgrep buildkitd                                                                                                             │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ image   │ functional-921447 image ls --format table --alsologtostderr                                                                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image build -t localhost/my-image:functional-921447 testdata/build --alsologtostderr                                            │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image ls                                                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ delete  │ -p functional-921447                                                                                                                              │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ start   │ -p functional-035643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ start   │ -p functional-035643 --alsologtostderr -v=8                                                                                                       │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:29 UTC │                     │
	│ cache   │ functional-035643 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache add registry.k8s.io/pause:latest                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache add minikube-local-cache-test:functional-035643                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache delete minikube-local-cache-test:functional-035643                                                                        │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl images                                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ cache   │ functional-035643 cache reload                                                                                                                    │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ kubectl │ functional-035643 kubectl -- --context functional-035643 get pods                                                                                 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
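	The cache rows above perform a round-trip check of minikube's image cache: add an image to the cache, delete it from the node's container runtime, reload the cache, and verify the image is present again (the first "crictl inspecti" row shows no completion time, consistent with the image having just been removed; the second, after "cache reload", completes). A minimal sketch of the same sequence, reusing only the profile name and image that appear in the table above:

		out/minikube-linux-arm64 -p functional-035643 cache add registry.k8s.io/pause:latest
		out/minikube-linux-arm64 -p functional-035643 ssh sudo crictl rmi registry.k8s.io/pause:latest
		out/minikube-linux-arm64 -p functional-035643 cache reload
		out/minikube-linux-arm64 -p functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest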
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:29:34
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:29:34.833608  525066 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:29:34.833799  525066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:34.833830  525066 out.go:374] Setting ErrFile to fd 2...
	I1212 00:29:34.833859  525066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:34.834244  525066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:29:34.834787  525066 out.go:368] Setting JSON to false
	I1212 00:29:34.835727  525066 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11520,"bootTime":1765487855,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:29:34.836335  525066 start.go:143] virtualization:  
	I1212 00:29:34.841302  525066 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:29:34.846669  525066 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:29:34.846785  525066 notify.go:221] Checking for updates...
	I1212 00:29:34.852399  525066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:29:34.855222  525066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:34.857924  525066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:29:34.860585  525066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:29:34.863145  525066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:29:34.866639  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:34.866818  525066 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:29:34.892569  525066 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:29:34.892680  525066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:34.954074  525066 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:29:34.944774098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:29:34.954186  525066 docker.go:319] overlay module found
	I1212 00:29:34.958427  525066 out.go:179] * Using the docker driver based on existing profile
	I1212 00:29:34.960983  525066 start.go:309] selected driver: docker
	I1212 00:29:34.961005  525066 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:34.961104  525066 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:29:34.961212  525066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:35.019269  525066 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:29:35.008770771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:29:35.019716  525066 cni.go:84] Creating CNI manager for ""
	I1212 00:29:35.019778  525066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:35.019842  525066 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:35.022879  525066 out.go:179] * Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	I1212 00:29:35.025659  525066 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:29:35.028463  525066 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:29:35.031434  525066 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:29:35.031495  525066 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:29:35.031510  525066 cache.go:65] Caching tarball of preloaded images
	I1212 00:29:35.031544  525066 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:29:35.031603  525066 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:29:35.031614  525066 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:29:35.031729  525066 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/config.json ...
	I1212 00:29:35.051219  525066 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:29:35.051245  525066 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:29:35.051267  525066 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:29:35.051303  525066 start.go:360] acquireMachinesLock for functional-035643: {Name:mkb0cdc7d354412594dc63c0234fde00134e758d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:29:35.051387  525066 start.go:364] duration metric: took 54.908µs to acquireMachinesLock for "functional-035643"
	I1212 00:29:35.051416  525066 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:29:35.051428  525066 fix.go:54] fixHost starting: 
	I1212 00:29:35.051696  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:35.069320  525066 fix.go:112] recreateIfNeeded on functional-035643: state=Running err=<nil>
	W1212 00:29:35.069352  525066 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:29:35.072554  525066 out.go:252] * Updating the running docker "functional-035643" container ...
	I1212 00:29:35.072600  525066 machine.go:94] provisionDockerMachine start ...
	I1212 00:29:35.072693  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.090330  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.090669  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.090706  525066 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:29:35.238363  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:29:35.238387  525066 ubuntu.go:182] provisioning hostname "functional-035643"
	I1212 00:29:35.238453  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.256201  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.256511  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.256528  525066 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-035643 && echo "functional-035643" | sudo tee /etc/hostname
	I1212 00:29:35.418094  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:29:35.418176  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.436164  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.436475  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.436494  525066 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-035643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-035643/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-035643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:29:35.594938  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:29:35.594969  525066 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 00:29:35.595009  525066 ubuntu.go:190] setting up certificates
	I1212 00:29:35.595026  525066 provision.go:84] configureAuth start
	I1212 00:29:35.595111  525066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:29:35.612398  525066 provision.go:143] copyHostCerts
	I1212 00:29:35.612439  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:29:35.612482  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 00:29:35.612494  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:29:35.612571  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 00:29:35.612671  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:29:35.612699  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 00:29:35.612707  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:29:35.612734  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 00:29:35.612781  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:29:35.612802  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 00:29:35.612813  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:29:35.612837  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 00:29:35.612889  525066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.functional-035643 san=[127.0.0.1 192.168.49.2 functional-035643 localhost minikube]
	I1212 00:29:35.977748  525066 provision.go:177] copyRemoteCerts
	I1212 00:29:35.977818  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:29:35.977857  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.995348  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.106772  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:29:36.106859  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:29:36.126035  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:29:36.126112  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:29:36.143996  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:29:36.144114  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:29:36.161387  525066 provision.go:87] duration metric: took 566.343959ms to configureAuth
	I1212 00:29:36.161415  525066 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:29:36.161612  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:36.161722  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.179565  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:36.179872  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:36.179896  525066 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:29:36.525259  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:29:36.525285  525066 machine.go:97] duration metric: took 1.45267532s to provisionDockerMachine
	I1212 00:29:36.525297  525066 start.go:293] postStartSetup for "functional-035643" (driver="docker")
	I1212 00:29:36.525310  525066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:29:36.525385  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:29:36.525432  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.544323  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.650745  525066 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:29:36.654027  525066 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 00:29:36.654058  525066 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 00:29:36.654063  525066 command_runner.go:130] > VERSION_ID="12"
	I1212 00:29:36.654067  525066 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 00:29:36.654072  525066 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 00:29:36.654076  525066 command_runner.go:130] > ID=debian
	I1212 00:29:36.654081  525066 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 00:29:36.654086  525066 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 00:29:36.654098  525066 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 00:29:36.654164  525066 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:29:36.654184  525066 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:29:36.654203  525066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 00:29:36.654261  525066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 00:29:36.654368  525066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 00:29:36.654379  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /etc/ssl/certs/4909542.pem
	I1212 00:29:36.654462  525066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> hosts in /etc/test/nested/copy/490954
	I1212 00:29:36.654470  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> /etc/test/nested/copy/490954/hosts
	I1212 00:29:36.654523  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/490954
	I1212 00:29:36.661942  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:29:36.678936  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts --> /etc/test/nested/copy/490954/hosts (40 bytes)
	I1212 00:29:36.696209  525066 start.go:296] duration metric: took 170.896684ms for postStartSetup
	I1212 00:29:36.696330  525066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:29:36.696401  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.716202  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.819154  525066 command_runner.go:130] > 18%
	I1212 00:29:36.819742  525066 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:29:36.823869  525066 command_runner.go:130] > 160G
	I1212 00:29:36.824320  525066 fix.go:56] duration metric: took 1.772888094s for fixHost
	I1212 00:29:36.824342  525066 start.go:83] releasing machines lock for "functional-035643", held for 1.772938226s
	I1212 00:29:36.824419  525066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:29:36.841414  525066 ssh_runner.go:195] Run: cat /version.json
	I1212 00:29:36.841444  525066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:29:36.841465  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.841499  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.858975  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.864277  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:37.063000  525066 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:29:37.063067  525066 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1212 00:29:37.063223  525066 ssh_runner.go:195] Run: systemctl --version
	I1212 00:29:37.069375  525066 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 00:29:37.069421  525066 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 00:29:37.069789  525066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:29:37.107153  525066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:29:37.111099  525066 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:29:37.111476  525066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:29:37.111538  525066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:29:37.119321  525066 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:29:37.119346  525066 start.go:496] detecting cgroup driver to use...
	I1212 00:29:37.119377  525066 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:29:37.119429  525066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:29:37.134288  525066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:29:37.147114  525066 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:29:37.147210  525066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:29:37.162260  525066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:29:37.175226  525066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:29:37.287755  525066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:29:37.404746  525066 docker.go:234] disabling docker service ...
	I1212 00:29:37.404828  525066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:29:37.419834  525066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:29:37.433027  525066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:29:37.553874  525066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:29:37.677379  525066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:29:37.696856  525066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:29:37.711415  525066 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 00:29:37.712568  525066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:29:37.712642  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.724126  525066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:29:37.724197  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.733568  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.743368  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.752442  525066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:29:37.761570  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.771444  525066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.780014  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.788901  525066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:29:37.795786  525066 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:29:37.796743  525066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:29:37.804315  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:37.916494  525066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:29:38.098236  525066 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:29:38.098362  525066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:29:38.102398  525066 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 00:29:38.102430  525066 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:29:38.102438  525066 command_runner.go:130] > Device: 0,72	Inode: 1642        Links: 1
	I1212 00:29:38.102445  525066 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:29:38.102451  525066 command_runner.go:130] > Access: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102458  525066 command_runner.go:130] > Modify: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102463  525066 command_runner.go:130] > Change: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102467  525066 command_runner.go:130] >  Birth: -
	I1212 00:29:38.102500  525066 start.go:564] Will wait 60s for crictl version
	I1212 00:29:38.102554  525066 ssh_runner.go:195] Run: which crictl
	I1212 00:29:38.105961  525066 command_runner.go:130] > /usr/local/bin/crictl
	I1212 00:29:38.106209  525066 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:29:38.130147  525066 command_runner.go:130] > Version:  0.1.0
	I1212 00:29:38.130215  525066 command_runner.go:130] > RuntimeName:  cri-o
	I1212 00:29:38.130236  525066 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1212 00:29:38.130255  525066 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:29:38.130299  525066 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:29:38.130400  525066 ssh_runner.go:195] Run: crio --version
	I1212 00:29:38.156955  525066 command_runner.go:130] > crio version 1.34.3
	I1212 00:29:38.157026  525066 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 00:29:38.157055  525066 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 00:29:38.157075  525066 command_runner.go:130] >    GitTreeState:   dirty
	I1212 00:29:38.157101  525066 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 00:29:38.157125  525066 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 00:29:38.157142  525066 command_runner.go:130] >    Compiler:       gc
	I1212 00:29:38.157162  525066 command_runner.go:130] >    Platform:       linux/arm64
	I1212 00:29:38.157188  525066 command_runner.go:130] >    Linkmode:       static
	I1212 00:29:38.157205  525066 command_runner.go:130] >    BuildTags:
	I1212 00:29:38.157231  525066 command_runner.go:130] >      static
	I1212 00:29:38.157260  525066 command_runner.go:130] >      netgo
	I1212 00:29:38.157278  525066 command_runner.go:130] >      osusergo
	I1212 00:29:38.157296  525066 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 00:29:38.157309  525066 command_runner.go:130] >      seccomp
	I1212 00:29:38.157334  525066 command_runner.go:130] >      apparmor
	I1212 00:29:38.157350  525066 command_runner.go:130] >      selinux
	I1212 00:29:38.157366  525066 command_runner.go:130] >    LDFlags:          unknown
	I1212 00:29:38.157384  525066 command_runner.go:130] >    SeccompEnabled:   true
	I1212 00:29:38.157415  525066 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 00:29:38.159818  525066 ssh_runner.go:195] Run: crio --version
	I1212 00:29:38.187365  525066 command_runner.go:130] > crio version 1.34.3
	I1212 00:29:38.187391  525066 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 00:29:38.187398  525066 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 00:29:38.187403  525066 command_runner.go:130] >    GitTreeState:   dirty
	I1212 00:29:38.187408  525066 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 00:29:38.187414  525066 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 00:29:38.187418  525066 command_runner.go:130] >    Compiler:       gc
	I1212 00:29:38.187438  525066 command_runner.go:130] >    Platform:       linux/arm64
	I1212 00:29:38.187447  525066 command_runner.go:130] >    Linkmode:       static
	I1212 00:29:38.187451  525066 command_runner.go:130] >    BuildTags:
	I1212 00:29:38.187455  525066 command_runner.go:130] >      static
	I1212 00:29:38.187459  525066 command_runner.go:130] >      netgo
	I1212 00:29:38.187463  525066 command_runner.go:130] >      osusergo
	I1212 00:29:38.187468  525066 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 00:29:38.187481  525066 command_runner.go:130] >      seccomp
	I1212 00:29:38.187489  525066 command_runner.go:130] >      apparmor
	I1212 00:29:38.187494  525066 command_runner.go:130] >      selinux
	I1212 00:29:38.187502  525066 command_runner.go:130] >    LDFlags:          unknown
	I1212 00:29:38.187507  525066 command_runner.go:130] >    SeccompEnabled:   true
	I1212 00:29:38.187511  525066 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 00:29:38.193058  525066 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:29:38.195137  525066 cli_runner.go:164] Run: docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:29:38.211553  525066 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:29:38.215227  525066 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 00:29:38.215507  525066 kubeadm.go:884] updating cluster {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:29:38.215633  525066 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:29:38.215688  525066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:38.248801  525066 command_runner.go:130] > {
	I1212 00:29:38.248822  525066 command_runner.go:130] >   "images":  [
	I1212 00:29:38.248827  525066 command_runner.go:130] >     {
	I1212 00:29:38.248837  525066 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:29:38.248842  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.248851  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:29:38.248855  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248859  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.248869  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 00:29:38.248877  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 00:29:38.248880  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248885  525066 command_runner.go:130] >       "size":  "111333938",
	I1212 00:29:38.248893  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.248898  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.248901  525066 command_runner.go:130] >     },
	I1212 00:29:38.248905  525066 command_runner.go:130] >     {
	I1212 00:29:38.248911  525066 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:29:38.248926  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.248931  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:29:38.248935  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248939  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.248951  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 00:29:38.248960  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:29:38.248967  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248971  525066 command_runner.go:130] >       "size":  "29037500",
	I1212 00:29:38.248975  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.248983  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.248987  525066 command_runner.go:130] >     },
	I1212 00:29:38.248990  525066 command_runner.go:130] >     {
	I1212 00:29:38.248998  525066 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:29:38.249004  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249018  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:29:38.249026  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249036  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249044  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 00:29:38.249058  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 00:29:38.249061  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249065  525066 command_runner.go:130] >       "size":  "74491780",
	I1212 00:29:38.249070  525066 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:29:38.249073  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249080  525066 command_runner.go:130] >     },
	I1212 00:29:38.249083  525066 command_runner.go:130] >     {
	I1212 00:29:38.249093  525066 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:29:38.249104  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249109  525066 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:29:38.249112  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249116  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249125  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 00:29:38.249135  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 00:29:38.249139  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249142  525066 command_runner.go:130] >       "size":  "60857170",
	I1212 00:29:38.249146  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249150  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249153  525066 command_runner.go:130] >       },
	I1212 00:29:38.249166  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249173  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249177  525066 command_runner.go:130] >     },
	I1212 00:29:38.249179  525066 command_runner.go:130] >     {
	I1212 00:29:38.249186  525066 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:29:38.249192  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249197  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:29:38.249201  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249205  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249215  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 00:29:38.249230  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 00:29:38.249234  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249241  525066 command_runner.go:130] >       "size":  "84949999",
	I1212 00:29:38.249245  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249249  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249254  525066 command_runner.go:130] >       },
	I1212 00:29:38.249259  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249263  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249268  525066 command_runner.go:130] >     },
	I1212 00:29:38.249272  525066 command_runner.go:130] >     {
	I1212 00:29:38.249281  525066 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:29:38.249294  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249301  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:29:38.249304  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249308  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249317  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 00:29:38.249326  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 00:29:38.249337  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249341  525066 command_runner.go:130] >       "size":  "72170325",
	I1212 00:29:38.249345  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249348  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249356  525066 command_runner.go:130] >       },
	I1212 00:29:38.249364  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249367  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249371  525066 command_runner.go:130] >     },
	I1212 00:29:38.249374  525066 command_runner.go:130] >     {
	I1212 00:29:38.249381  525066 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:29:38.249386  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249391  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:29:38.249394  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249398  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249409  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 00:29:38.249426  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:29:38.249434  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249438  525066 command_runner.go:130] >       "size":  "74106775",
	I1212 00:29:38.249450  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249454  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249458  525066 command_runner.go:130] >     },
	I1212 00:29:38.249461  525066 command_runner.go:130] >     {
	I1212 00:29:38.249468  525066 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:29:38.249472  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249481  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:29:38.249484  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249488  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249502  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 00:29:38.249522  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 00:29:38.249528  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249532  525066 command_runner.go:130] >       "size":  "49822549",
	I1212 00:29:38.249535  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249539  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249549  525066 command_runner.go:130] >       },
	I1212 00:29:38.249553  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249556  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249559  525066 command_runner.go:130] >     },
	I1212 00:29:38.249562  525066 command_runner.go:130] >     {
	I1212 00:29:38.249568  525066 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:29:38.249572  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249576  525066 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.249581  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249586  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249598  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 00:29:38.249606  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 00:29:38.249613  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249617  525066 command_runner.go:130] >       "size":  "519884",
	I1212 00:29:38.249621  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249626  525066 command_runner.go:130] >         "value":  "65535"
	I1212 00:29:38.249633  525066 command_runner.go:130] >       },
	I1212 00:29:38.249642  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249646  525066 command_runner.go:130] >       "pinned":  true
	I1212 00:29:38.249649  525066 command_runner.go:130] >     }
	I1212 00:29:38.249653  525066 command_runner.go:130] >   ]
	I1212 00:29:38.249656  525066 command_runner.go:130] > }
	I1212 00:29:38.252138  525066 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:38.252165  525066 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:29:38.252226  525066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:38.276626  525066 command_runner.go:130] > {
	I1212 00:29:38.276647  525066 command_runner.go:130] >   "images":  [
	I1212 00:29:38.276651  525066 command_runner.go:130] >     {
	I1212 00:29:38.276660  525066 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:29:38.276674  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276681  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:29:38.276684  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276690  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276700  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 00:29:38.276711  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 00:29:38.276717  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276721  525066 command_runner.go:130] >       "size":  "111333938",
	I1212 00:29:38.276725  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.276731  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276737  525066 command_runner.go:130] >     },
	I1212 00:29:38.276740  525066 command_runner.go:130] >     {
	I1212 00:29:38.276747  525066 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:29:38.276754  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276760  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:29:38.276767  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276771  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276781  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 00:29:38.276790  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:29:38.276794  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276799  525066 command_runner.go:130] >       "size":  "29037500",
	I1212 00:29:38.276807  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.276815  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276822  525066 command_runner.go:130] >     },
	I1212 00:29:38.276826  525066 command_runner.go:130] >     {
	I1212 00:29:38.276833  525066 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:29:38.276839  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276845  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:29:38.276850  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276854  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276868  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 00:29:38.276876  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 00:29:38.276879  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276883  525066 command_runner.go:130] >       "size":  "74491780",
	I1212 00:29:38.276891  525066 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:29:38.276895  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276901  525066 command_runner.go:130] >     },
	I1212 00:29:38.276904  525066 command_runner.go:130] >     {
	I1212 00:29:38.276911  525066 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:29:38.276918  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276922  525066 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:29:38.276925  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276930  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276940  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 00:29:38.276951  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 00:29:38.276954  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276973  525066 command_runner.go:130] >       "size":  "60857170",
	I1212 00:29:38.276977  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.276980  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.276983  525066 command_runner.go:130] >       },
	I1212 00:29:38.276994  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277001  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277004  525066 command_runner.go:130] >     },
	I1212 00:29:38.277007  525066 command_runner.go:130] >     {
	I1212 00:29:38.277014  525066 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:29:38.277019  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277032  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:29:38.277039  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277043  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277051  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 00:29:38.277066  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 00:29:38.277070  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277074  525066 command_runner.go:130] >       "size":  "84949999",
	I1212 00:29:38.277078  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277086  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277089  525066 command_runner.go:130] >       },
	I1212 00:29:38.277093  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277101  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277104  525066 command_runner.go:130] >     },
	I1212 00:29:38.277110  525066 command_runner.go:130] >     {
	I1212 00:29:38.277117  525066 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:29:38.277123  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277129  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:29:38.277132  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277136  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277145  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 00:29:38.277157  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 00:29:38.277160  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277164  525066 command_runner.go:130] >       "size":  "72170325",
	I1212 00:29:38.277167  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277171  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277175  525066 command_runner.go:130] >       },
	I1212 00:29:38.277181  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277186  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277191  525066 command_runner.go:130] >     },
	I1212 00:29:38.277194  525066 command_runner.go:130] >     {
	I1212 00:29:38.277203  525066 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:29:38.277209  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277215  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:29:38.277225  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277229  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277238  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 00:29:38.277251  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:29:38.277255  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277259  525066 command_runner.go:130] >       "size":  "74106775",
	I1212 00:29:38.277263  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277269  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277273  525066 command_runner.go:130] >     },
	I1212 00:29:38.277276  525066 command_runner.go:130] >     {
	I1212 00:29:38.277283  525066 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:29:38.277289  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277294  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:29:38.277297  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277301  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277309  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 00:29:38.277326  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 00:29:38.277330  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277334  525066 command_runner.go:130] >       "size":  "49822549",
	I1212 00:29:38.277340  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277344  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277347  525066 command_runner.go:130] >       },
	I1212 00:29:38.277351  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277357  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277360  525066 command_runner.go:130] >     },
	I1212 00:29:38.277364  525066 command_runner.go:130] >     {
	I1212 00:29:38.277373  525066 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:29:38.277377  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277390  525066 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.277394  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277397  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277405  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 00:29:38.277416  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 00:29:38.277424  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277429  525066 command_runner.go:130] >       "size":  "519884",
	I1212 00:29:38.277432  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277438  525066 command_runner.go:130] >         "value":  "65535"
	I1212 00:29:38.277442  525066 command_runner.go:130] >       },
	I1212 00:29:38.277447  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277453  525066 command_runner.go:130] >       "pinned":  true
	I1212 00:29:38.277456  525066 command_runner.go:130] >     }
	I1212 00:29:38.277459  525066 command_runner.go:130] >   ]
	I1212 00:29:38.277464  525066 command_runner.go:130] > }
	I1212 00:29:38.282583  525066 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:38.282606  525066 cache_images.go:86] Images are preloaded, skipping loading
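The check logged at crio.go:514 and cache_images.go:86 amounts to parsing the "crictl images --output json" dump shown above and confirming every tag kubeadm will ask for is already present. A minimal, self-contained Go sketch of that idea (this is not minikube's actual code; the JSON field names and the expected tags are copied from the log above):

// preloadcheck is a sketch, not minikube's implementation: it parses the
// same "sudo crictl images --output json" output shown in the log and
// reports whether the tags kubeadm needs are present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields we need; unknown JSON keys are ignored by encoding/json.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Pinned   bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Expected tags taken from the log output above.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0",
		"registry.k8s.io/kube-scheduler:v1.35.0-beta.0",
		"registry.k8s.io/kube-proxy:v1.35.0-beta.0",
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/coredns/coredns:v1.13.1",
		"registry.k8s.io/pause:3.10.1",
	} {
		fmt.Printf("%-55s preloaded: %v\n", want, have[want])
	}
}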
	I1212 00:29:38.282613  525066 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 00:29:38.282744  525066 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-035643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
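For readability, the kubelet fragment logged above as one run-on line is an ordinary systemd override: the empty ExecStart= line clears the inherited command before the second ExecStart= sets the full kubelet invocation. (The log does not show where the file is written; /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is the conventional location and is an assumption here.)

[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-035643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]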
	I1212 00:29:38.282831  525066 ssh_runner.go:195] Run: crio config
	I1212 00:29:38.339065  525066 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 00:29:38.339140  525066 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 00:29:38.339162  525066 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 00:29:38.339180  525066 command_runner.go:130] > #
	I1212 00:29:38.339218  525066 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 00:29:38.339243  525066 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 00:29:38.339261  525066 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 00:29:38.339304  525066 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 00:29:38.339327  525066 command_runner.go:130] > # reload'.
	I1212 00:29:38.339346  525066 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 00:29:38.339379  525066 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 00:29:38.339402  525066 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 00:29:38.339422  525066 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 00:29:38.339436  525066 command_runner.go:130] > [crio]
	I1212 00:29:38.339466  525066 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 00:29:38.339488  525066 command_runner.go:130] > # containers images, in this directory.
	I1212 00:29:38.339510  525066 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 00:29:38.339541  525066 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 00:29:38.339562  525066 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1212 00:29:38.339583  525066 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1212 00:29:38.339600  525066 command_runner.go:130] > # imagestore = ""
	I1212 00:29:38.339629  525066 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 00:29:38.339652  525066 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 00:29:38.339676  525066 command_runner.go:130] > # storage_driver = "overlay"
	I1212 00:29:38.339707  525066 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 00:29:38.339730  525066 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 00:29:38.339746  525066 command_runner.go:130] > # storage_option = [
	I1212 00:29:38.339762  525066 command_runner.go:130] > # ]
	I1212 00:29:38.339794  525066 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 00:29:38.339818  525066 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 00:29:38.339834  525066 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 00:29:38.339852  525066 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 00:29:38.339890  525066 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 00:29:38.339907  525066 command_runner.go:130] > # always happen on a node reboot
	I1212 00:29:38.339923  525066 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 00:29:38.339959  525066 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 00:29:38.339984  525066 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 00:29:38.340001  525066 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 00:29:38.340029  525066 command_runner.go:130] > # version_file_persist = ""
	I1212 00:29:38.340052  525066 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 00:29:38.340072  525066 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 00:29:38.340087  525066 command_runner.go:130] > # internal_wipe = true
	I1212 00:29:38.340117  525066 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1212 00:29:38.340140  525066 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1212 00:29:38.340157  525066 command_runner.go:130] > # internal_repair = true
	I1212 00:29:38.340175  525066 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 00:29:38.340208  525066 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 00:29:38.340228  525066 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 00:29:38.340246  525066 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 00:29:38.340277  525066 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 00:29:38.340300  525066 command_runner.go:130] > [crio.api]
	I1212 00:29:38.340319  525066 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 00:29:38.340336  525066 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 00:29:38.340365  525066 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 00:29:38.340387  525066 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 00:29:38.340407  525066 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 00:29:38.340447  525066 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 00:29:38.340822  525066 command_runner.go:130] > # stream_port = "0"
	I1212 00:29:38.340835  525066 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 00:29:38.341007  525066 command_runner.go:130] > # stream_enable_tls = false
	I1212 00:29:38.341018  525066 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 00:29:38.341210  525066 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 00:29:38.341221  525066 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 00:29:38.341229  525066 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1212 00:29:38.341233  525066 command_runner.go:130] > # stream_tls_cert = ""
	I1212 00:29:38.341239  525066 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 00:29:38.341245  525066 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1212 00:29:38.341249  525066 command_runner.go:130] > # stream_tls_key = ""
	I1212 00:29:38.341255  525066 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 00:29:38.341261  525066 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 00:29:38.341272  525066 command_runner.go:130] > # automatically pick up the changes.
	I1212 00:29:38.341446  525066 command_runner.go:130] > # stream_tls_ca = ""
	I1212 00:29:38.341475  525066 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:29:38.341751  525066 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 00:29:38.341765  525066 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:29:38.341770  525066 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
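# For scale, the defaults above work out as 83886080 = 80 * 1024 * 1024
# bytes (80 MiB); a larger cap would be set here in bytes, for example
# (an illustrative value, not a recommendation):
# grpc_max_recv_msg_size = 167772160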
	I1212 00:29:38.341777  525066 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 00:29:38.341782  525066 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 00:29:38.341786  525066 command_runner.go:130] > [crio.runtime]
	I1212 00:29:38.341792  525066 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 00:29:38.341798  525066 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 00:29:38.341801  525066 command_runner.go:130] > # "nofile=1024:2048"
	I1212 00:29:38.341807  525066 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 00:29:38.341811  525066 command_runner.go:130] > # default_ulimits = [
	I1212 00:29:38.341814  525066 command_runner.go:130] > # ]
	I1212 00:29:38.341821  525066 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 00:29:38.341824  525066 command_runner.go:130] > # no_pivot = false
	I1212 00:29:38.341830  525066 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 00:29:38.341836  525066 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 00:29:38.341841  525066 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 00:29:38.341847  525066 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 00:29:38.341851  525066 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 00:29:38.341858  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:29:38.342059  525066 command_runner.go:130] > # conmon = ""
	I1212 00:29:38.342069  525066 command_runner.go:130] > # Cgroup setting for conmon
	I1212 00:29:38.342077  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 00:29:38.342081  525066 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 00:29:38.342087  525066 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 00:29:38.342093  525066 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 00:29:38.342100  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:29:38.342293  525066 command_runner.go:130] > # conmon_env = [
	I1212 00:29:38.342301  525066 command_runner.go:130] > # ]
	I1212 00:29:38.342307  525066 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 00:29:38.342312  525066 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 00:29:38.342318  525066 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 00:29:38.342321  525066 command_runner.go:130] > # default_env = [
	I1212 00:29:38.342325  525066 command_runner.go:130] > # ]
	I1212 00:29:38.342330  525066 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 00:29:38.342338  525066 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1212 00:29:38.342531  525066 command_runner.go:130] > # selinux = false
	I1212 00:29:38.342542  525066 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 00:29:38.342551  525066 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1212 00:29:38.342556  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342765  525066 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:29:38.342777  525066 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1212 00:29:38.342783  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342787  525066 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1212 00:29:38.342804  525066 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 00:29:38.342810  525066 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 00:29:38.342817  525066 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 00:29:38.342823  525066 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 00:29:38.342828  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342833  525066 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 00:29:38.342838  525066 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 00:29:38.342842  525066 command_runner.go:130] > # the cgroup blockio controller.
	I1212 00:29:38.343029  525066 command_runner.go:130] > # blockio_config_file = ""
	I1212 00:29:38.343040  525066 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1212 00:29:38.343044  525066 command_runner.go:130] > # blockio parameters.
	I1212 00:29:38.343244  525066 command_runner.go:130] > # blockio_reload = false
	I1212 00:29:38.343255  525066 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 00:29:38.343260  525066 command_runner.go:130] > # irqbalance daemon.
	I1212 00:29:38.343265  525066 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 00:29:38.343271  525066 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1212 00:29:38.343278  525066 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1212 00:29:38.343285  525066 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1212 00:29:38.343472  525066 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1212 00:29:38.343488  525066 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 00:29:38.343494  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.343668  525066 command_runner.go:130] > # rdt_config_file = ""
	I1212 00:29:38.343679  525066 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 00:29:38.343683  525066 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 00:29:38.343690  525066 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 00:29:38.343893  525066 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 00:29:38.343905  525066 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 00:29:38.343912  525066 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 00:29:38.343920  525066 command_runner.go:130] > # will be added.
	I1212 00:29:38.343925  525066 command_runner.go:130] > # default_capabilities = [
	I1212 00:29:38.344172  525066 command_runner.go:130] > # 	"CHOWN",
	I1212 00:29:38.344180  525066 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 00:29:38.344184  525066 command_runner.go:130] > # 	"FSETID",
	I1212 00:29:38.344187  525066 command_runner.go:130] > # 	"FOWNER",
	I1212 00:29:38.344191  525066 command_runner.go:130] > # 	"SETGID",
	I1212 00:29:38.344194  525066 command_runner.go:130] > # 	"SETUID",
	I1212 00:29:38.344217  525066 command_runner.go:130] > # 	"SETPCAP",
	I1212 00:29:38.344397  525066 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 00:29:38.344405  525066 command_runner.go:130] > # 	"KILL",
	I1212 00:29:38.344408  525066 command_runner.go:130] > # ]
	I1212 00:29:38.344417  525066 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 00:29:38.344424  525066 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 00:29:38.344614  525066 command_runner.go:130] > # add_inheritable_capabilities = false
	I1212 00:29:38.344634  525066 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 00:29:38.344641  525066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:29:38.344645  525066 command_runner.go:130] > default_sysctls = [
	I1212 00:29:38.344818  525066 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1212 00:29:38.344834  525066 command_runner.go:130] > ]
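# Later config fragments override whole keys rather than appending, so a
# drop-in (CRI-O reads extra fragments from a config directory, commonly
# /etc/crio/crio.conf.d/, an assumption in this sketch) that extends the
# list above must repeat the existing entry:
#
#   [crio.runtime]
#   default_sysctls = [
#   	"net.ipv4.ip_unprivileged_port_start=0",
#   	"net.ipv4.ping_group_range=0 2147483647",
#   ]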
	I1212 00:29:38.344839  525066 command_runner.go:130] > # List of devices on the host that a
	I1212 00:29:38.344846  525066 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 00:29:38.344850  525066 command_runner.go:130] > # allowed_devices = [
	I1212 00:29:38.345064  525066 command_runner.go:130] > # 	"/dev/fuse",
	I1212 00:29:38.345072  525066 command_runner.go:130] > # 	"/dev/net/tun",
	I1212 00:29:38.345076  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345089  525066 command_runner.go:130] > # List of additional devices, specified as
	I1212 00:29:38.345098  525066 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 00:29:38.345141  525066 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 00:29:38.345151  525066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:29:38.345155  525066 command_runner.go:130] > # additional_devices = [
	I1212 00:29:38.345354  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345364  525066 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 00:29:38.345368  525066 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 00:29:38.345371  525066 command_runner.go:130] > # 	"/etc/cdi",
	I1212 00:29:38.345585  525066 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 00:29:38.345593  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345600  525066 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 00:29:38.345606  525066 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 00:29:38.345609  525066 command_runner.go:130] > # Defaults to false.
	I1212 00:29:38.345614  525066 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 00:29:38.345652  525066 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 00:29:38.345661  525066 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 00:29:38.345665  525066 command_runner.go:130] > # hooks_dir = [
	I1212 00:29:38.345877  525066 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 00:29:38.345885  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345892  525066 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 00:29:38.345899  525066 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 00:29:38.345904  525066 command_runner.go:130] > # its default mounts from the following two files:
	I1212 00:29:38.345907  525066 command_runner.go:130] > #
	I1212 00:29:38.345914  525066 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 00:29:38.345957  525066 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 00:29:38.345963  525066 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 00:29:38.345966  525066 command_runner.go:130] > #
	I1212 00:29:38.345972  525066 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 00:29:38.345979  525066 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 00:29:38.345986  525066 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 00:29:38.345991  525066 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 00:29:38.346020  525066 command_runner.go:130] > #
	I1212 00:29:38.346210  525066 command_runner.go:130] > # default_mounts_file = ""
	I1212 00:29:38.346221  525066 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 00:29:38.346228  525066 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 00:29:38.346444  525066 command_runner.go:130] > # pids_limit = -1
	I1212 00:29:38.346456  525066 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 00:29:38.346463  525066 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 00:29:38.346469  525066 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 00:29:38.346478  525066 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 00:29:38.346512  525066 command_runner.go:130] > # log_size_max = -1
	I1212 00:29:38.346523  525066 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 00:29:38.346724  525066 command_runner.go:130] > # log_to_journald = false
	I1212 00:29:38.346736  525066 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 00:29:38.346742  525066 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 00:29:38.346747  525066 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 00:29:38.347111  525066 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 00:29:38.347122  525066 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 00:29:38.347127  525066 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 00:29:38.347132  525066 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 00:29:38.347136  525066 command_runner.go:130] > # read_only = false
	I1212 00:29:38.347142  525066 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 00:29:38.347149  525066 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 00:29:38.347186  525066 command_runner.go:130] > # live configuration reload.
	I1212 00:29:38.347359  525066 command_runner.go:130] > # log_level = "info"
	I1212 00:29:38.347376  525066 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 00:29:38.347381  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.347597  525066 command_runner.go:130] > # log_filter = ""
	I1212 00:29:38.347608  525066 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 00:29:38.347615  525066 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 00:29:38.347619  525066 command_runner.go:130] > # separated by comma.
	I1212 00:29:38.347671  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347679  525066 command_runner.go:130] > # uid_mappings = ""
	I1212 00:29:38.347686  525066 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 00:29:38.347692  525066 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 00:29:38.347696  525066 command_runner.go:130] > # separated by comma.
	I1212 00:29:38.347704  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347707  525066 command_runner.go:130] > # gid_mappings = ""
	I1212 00:29:38.347714  525066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 00:29:38.347746  525066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:29:38.347757  525066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:29:38.347765  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347769  525066 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 00:29:38.347775  525066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 00:29:38.347781  525066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:29:38.347787  525066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:29:38.347822  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.348158  525066 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 00:29:38.348170  525066 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 00:29:38.348176  525066 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 00:29:38.348182  525066 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 00:29:38.348415  525066 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 00:29:38.348427  525066 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 00:29:38.348433  525066 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 00:29:38.348438  525066 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 00:29:38.348442  525066 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 00:29:38.348641  525066 command_runner.go:130] > # drop_infra_ctr = true
	I1212 00:29:38.348653  525066 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 00:29:38.348659  525066 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 00:29:38.348666  525066 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 00:29:38.348674  525066 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 00:29:38.348712  525066 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1212 00:29:38.348725  525066 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1212 00:29:38.348731  525066 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1212 00:29:38.348736  525066 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1212 00:29:38.348935  525066 command_runner.go:130] > # shared_cpuset = ""
	I1212 00:29:38.348946  525066 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 00:29:38.348952  525066 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 00:29:38.348956  525066 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 00:29:38.348964  525066 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 00:29:38.349178  525066 command_runner.go:130] > # pinns_path = ""
	I1212 00:29:38.349189  525066 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1212 00:29:38.349195  525066 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1212 00:29:38.349199  525066 command_runner.go:130] > # enable_criu_support = true
	I1212 00:29:38.349214  525066 command_runner.go:130] > # Enable/disable the generation of the container,
	I1212 00:29:38.349253  525066 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1212 00:29:38.349272  525066 command_runner.go:130] > # enable_pod_events = false
	I1212 00:29:38.349291  525066 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 00:29:38.349322  525066 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1212 00:29:38.349505  525066 command_runner.go:130] > # default_runtime = "crun"
	I1212 00:29:38.349536  525066 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 00:29:38.349573  525066 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1212 00:29:38.349601  525066 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 00:29:38.349618  525066 command_runner.go:130] > # creation as a file is not desired either.
	I1212 00:29:38.349653  525066 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 00:29:38.349674  525066 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 00:29:38.349690  525066 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 00:29:38.349956  525066 command_runner.go:130] > # ]
	I1212 00:29:38.350003  525066 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 00:29:38.350025  525066 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 00:29:38.350043  525066 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1212 00:29:38.350074  525066 command_runner.go:130] > # Each entry in the table should follow the format:
	I1212 00:29:38.350093  525066 command_runner.go:130] > #
	I1212 00:29:38.350110  525066 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1212 00:29:38.350127  525066 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1212 00:29:38.350158  525066 command_runner.go:130] > # runtime_type = "oci"
	I1212 00:29:38.350179  525066 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1212 00:29:38.350201  525066 command_runner.go:130] > # inherit_default_runtime = false
	I1212 00:29:38.350218  525066 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1212 00:29:38.350253  525066 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1212 00:29:38.350271  525066 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1212 00:29:38.350287  525066 command_runner.go:130] > # monitor_env = []
	I1212 00:29:38.350317  525066 command_runner.go:130] > # privileged_without_host_devices = false
	I1212 00:29:38.350339  525066 command_runner.go:130] > # allowed_annotations = []
	I1212 00:29:38.350358  525066 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1212 00:29:38.350372  525066 command_runner.go:130] > # no_sync_log = false
	I1212 00:29:38.350402  525066 command_runner.go:130] > # default_annotations = {}
	I1212 00:29:38.350419  525066 command_runner.go:130] > # stream_websockets = false
	I1212 00:29:38.350436  525066 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:29:38.350499  525066 command_runner.go:130] > # Where:
	I1212 00:29:38.350529  525066 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1212 00:29:38.350561  525066 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1212 00:29:38.350588  525066 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 00:29:38.350607  525066 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 00:29:38.350635  525066 command_runner.go:130] > #   in $PATH.
	I1212 00:29:38.350670  525066 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1212 00:29:38.350713  525066 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 00:29:38.350735  525066 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1212 00:29:38.350750  525066 command_runner.go:130] > #   state.
	I1212 00:29:38.350780  525066 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 00:29:38.350928  525066 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 00:29:38.351028  525066 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1212 00:29:38.351155  525066 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1212 00:29:38.351251  525066 command_runner.go:130] > #   the values from the default runtime on load time.
	I1212 00:29:38.351344  525066 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 00:29:38.351530  525066 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 00:29:38.351817  525066 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 00:29:38.352119  525066 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 00:29:38.352319  525066 command_runner.go:130] > #   The currently recognized values are:
	I1212 00:29:38.352557  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 00:29:38.352766  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 00:29:38.352929  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 00:29:38.353036  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 00:29:38.353153  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 00:29:38.353519  525066 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 00:29:38.353569  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1212 00:29:38.353580  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1212 00:29:38.353587  525066 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 00:29:38.353593  525066 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1212 00:29:38.353637  525066 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1212 00:29:38.353645  525066 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1212 00:29:38.353652  525066 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1212 00:29:38.353658  525066 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1212 00:29:38.353664  525066 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1212 00:29:38.353679  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1212 00:29:38.353695  525066 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1212 00:29:38.353699  525066 command_runner.go:130] > #   deprecated option "conmon".
	I1212 00:29:38.353706  525066 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1212 00:29:38.353766  525066 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1212 00:29:38.353805  525066 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1212 00:29:38.353814  525066 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 00:29:38.353822  525066 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1212 00:29:38.353826  525066 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1212 00:29:38.353834  525066 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1212 00:29:38.353838  525066 command_runner.go:130] > #   conmon-rs by using:
	I1212 00:29:38.353893  525066 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1212 00:29:38.353903  525066 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1212 00:29:38.353947  525066 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1212 00:29:38.353958  525066 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1212 00:29:38.353963  525066 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1212 00:29:38.353971  525066 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1212 00:29:38.353979  525066 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1212 00:29:38.353984  525066 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1212 00:29:38.353992  525066 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1212 00:29:38.354039  525066 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1212 00:29:38.354048  525066 command_runner.go:130] > #   when a machine crash happens.
	I1212 00:29:38.354056  525066 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1212 00:29:38.354064  525066 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1212 00:29:38.354100  525066 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1212 00:29:38.354106  525066 command_runner.go:130] > #   seccomp profile for the runtime.
	I1212 00:29:38.354113  525066 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1212 00:29:38.354120  525066 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1212 00:29:38.354123  525066 command_runner.go:130] > #
	I1212 00:29:38.354169  525066 command_runner.go:130] > # Using the seccomp notifier feature:
	I1212 00:29:38.354175  525066 command_runner.go:130] > #
	I1212 00:29:38.354188  525066 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1212 00:29:38.354195  525066 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1212 00:29:38.354198  525066 command_runner.go:130] > #
	I1212 00:29:38.354204  525066 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1212 00:29:38.354210  525066 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1212 00:29:38.354212  525066 command_runner.go:130] > #
	I1212 00:29:38.354258  525066 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1212 00:29:38.354270  525066 command_runner.go:130] > # feature.
	I1212 00:29:38.354273  525066 command_runner.go:130] > #
	I1212 00:29:38.354279  525066 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1212 00:29:38.354286  525066 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1212 00:29:38.354292  525066 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1212 00:29:38.354298  525066 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1212 00:29:38.354350  525066 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1212 00:29:38.354355  525066 command_runner.go:130] > #
	I1212 00:29:38.354362  525066 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1212 00:29:38.354402  525066 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1212 00:29:38.354408  525066 command_runner.go:130] > #
	I1212 00:29:38.354414  525066 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1212 00:29:38.354420  525066 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1212 00:29:38.354423  525066 command_runner.go:130] > #
	I1212 00:29:38.354429  525066 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1212 00:29:38.354471  525066 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1212 00:29:38.354477  525066 command_runner.go:130] > # limitation.
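For illustration only (not part of this test run): a minimal Go sketch of a Pod that opts into the seccomp notifier described above. The annotation value "stop" and the pause image come from this config dump; the pod name, container name, and use of sigs.k8s.io/yaml are assumptions.

// Hypothetical sketch: a Pod opting into the seccomp notifier. Assumes a
// CRI-O runtime whose allowed_annotations include
// "io.kubernetes.cri-o.seccompNotifierAction".
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "seccomp-debug", // hypothetical name
			Annotations: map[string]string{
				// "stop" terminates the workload ~5s after a blocked syscall.
				"io.kubernetes.cri-o.seccompNotifierAction": "stop",
			},
		},
		Spec: corev1.PodSpec{
			// Must be Never, or the kubelet restarts the container immediately.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.k8s.io/pause:3.10.1",
			}},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}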
	I1212 00:29:38.354481  525066 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1212 00:29:38.354485  525066 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1212 00:29:38.354492  525066 command_runner.go:130] > runtime_type = ""
	I1212 00:29:38.354498  525066 command_runner.go:130] > runtime_root = "/run/crun"
	I1212 00:29:38.354502  525066 command_runner.go:130] > inherit_default_runtime = false
	I1212 00:29:38.354538  525066 command_runner.go:130] > runtime_config_path = ""
	I1212 00:29:38.354545  525066 command_runner.go:130] > container_min_memory = ""
	I1212 00:29:38.354550  525066 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 00:29:38.354554  525066 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 00:29:38.354558  525066 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:29:38.354561  525066 command_runner.go:130] > allowed_annotations = [
	I1212 00:29:38.354565  525066 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1212 00:29:38.354568  525066 command_runner.go:130] > ]
	I1212 00:29:38.354573  525066 command_runner.go:130] > privileged_without_host_devices = false
	I1212 00:29:38.354577  525066 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 00:29:38.354588  525066 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1212 00:29:38.354592  525066 command_runner.go:130] > runtime_type = ""
	I1212 00:29:38.354595  525066 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 00:29:38.354647  525066 command_runner.go:130] > inherit_default_runtime = false
	I1212 00:29:38.354654  525066 command_runner.go:130] > runtime_config_path = ""
	I1212 00:29:38.354659  525066 command_runner.go:130] > container_min_memory = ""
	I1212 00:29:38.354663  525066 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 00:29:38.354667  525066 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 00:29:38.354671  525066 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:29:38.354675  525066 command_runner.go:130] > privileged_without_host_devices = false
	I1212 00:29:38.354692  525066 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 00:29:38.354700  525066 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 00:29:38.354706  525066 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 00:29:38.354719  525066 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1212 00:29:38.354731  525066 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1212 00:29:38.354778  525066 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1212 00:29:38.354787  525066 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1212 00:29:38.354793  525066 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 00:29:38.354803  525066 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 00:29:38.354848  525066 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 00:29:38.354862  525066 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 00:29:38.354870  525066 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 00:29:38.354909  525066 command_runner.go:130] > # Example:
	I1212 00:29:38.354916  525066 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 00:29:38.354921  525066 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 00:29:38.354929  525066 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 00:29:38.354970  525066 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 00:29:38.354976  525066 command_runner.go:130] > # cpuset = "0-1"
	I1212 00:29:38.354979  525066 command_runner.go:130] > # cpushares = "5"
	I1212 00:29:38.354982  525066 command_runner.go:130] > # cpuquota = "1000"
	I1212 00:29:38.354986  525066 command_runner.go:130] > # cpuperiod = "100000"
	I1212 00:29:38.354989  525066 command_runner.go:130] > # cpulimit = "35"
	I1212 00:29:38.354992  525066 command_runner.go:130] > # Where:
	I1212 00:29:38.355002  525066 command_runner.go:130] > # The workload name is workload-type.
	I1212 00:29:38.355009  525066 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 00:29:38.355015  525066 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 00:29:38.355066  525066 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 00:29:38.355077  525066 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 00:29:38.355083  525066 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
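The cpulimit-to-cpuquota arithmetic described above is worth making concrete. A small Go sketch (illustrative, not CRI-O code) using the example values from the workload config:

// Sketch of the conversion: quota(us) = cpulimit(millicores) * period(us) / 1000.
package main

import "fmt"

func cpuQuota(cpulimitMillicores, cpuPeriodUs int64) int64 {
	return cpulimitMillicores * cpuPeriodUs / 1000
}

func main() {
	// With the example above: cpulimit = "35", cpuperiod = "100000".
	fmt.Println(cpuQuota(35, 100000)) // 3500 us of CPU time per 100000 us period
}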
	I1212 00:29:38.355088  525066 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1212 00:29:38.355095  525066 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1212 00:29:38.355099  525066 command_runner.go:130] > # Default value is set to true
	I1212 00:29:38.355467  525066 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1212 00:29:38.355620  525066 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1212 00:29:38.355721  525066 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1212 00:29:38.355871  525066 command_runner.go:130] > # Default value is set to 'false'
	I1212 00:29:38.356033  525066 command_runner.go:130] > # disable_hostport_mapping = false
	I1212 00:29:38.356163  525066 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1212 00:29:38.356284  525066 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1212 00:29:38.356367  525066 command_runner.go:130] > # timezone = ""
	I1212 00:29:38.356485  525066 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 00:29:38.356560  525066 command_runner.go:130] > #
	I1212 00:29:38.356636  525066 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 00:29:38.356830  525066 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1212 00:29:38.356937  525066 command_runner.go:130] > [crio.image]
	I1212 00:29:38.357065  525066 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 00:29:38.357172  525066 command_runner.go:130] > # default_transport = "docker://"
	I1212 00:29:38.357258  525066 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 00:29:38.357455  525066 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:29:38.357729  525066 command_runner.go:130] > # global_auth_file = ""
	I1212 00:29:38.357787  525066 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 00:29:38.357796  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.357801  525066 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.357809  525066 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 00:29:38.357821  525066 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:29:38.357827  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.357837  525066 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 00:29:38.357843  525066 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 00:29:38.357850  525066 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 00:29:38.358627  525066 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 00:29:38.358638  525066 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 00:29:38.358643  525066 command_runner.go:130] > # pause_command = "/pause"
	I1212 00:29:38.358649  525066 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1212 00:29:38.358655  525066 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1212 00:29:38.358662  525066 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1212 00:29:38.358668  525066 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1212 00:29:38.358674  525066 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1212 00:29:38.358693  525066 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1212 00:29:38.358700  525066 command_runner.go:130] > # pinned_images = [
	I1212 00:29:38.358703  525066 command_runner.go:130] > # ]
	I1212 00:29:38.358709  525066 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 00:29:38.358716  525066 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 00:29:38.358723  525066 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 00:29:38.358729  525066 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 00:29:38.358734  525066 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 00:29:38.358740  525066 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1212 00:29:38.358745  525066 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1212 00:29:38.358752  525066 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1212 00:29:38.358758  525066 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1212 00:29:38.358764  525066 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1212 00:29:38.358771  525066 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1212 00:29:38.358776  525066 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1212 00:29:38.358782  525066 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 00:29:38.358788  525066 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 00:29:38.358791  525066 command_runner.go:130] > # changing them here.
	I1212 00:29:38.358801  525066 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1212 00:29:38.358805  525066 command_runner.go:130] > # insecure_registries = [
	I1212 00:29:38.358808  525066 command_runner.go:130] > # ]
	I1212 00:29:38.358814  525066 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 00:29:38.358828  525066 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 00:29:38.358833  525066 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 00:29:38.358838  525066 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 00:29:38.358842  525066 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 00:29:38.358848  525066 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1212 00:29:38.358855  525066 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1212 00:29:38.358860  525066 command_runner.go:130] > # auto_reload_registries = false
	I1212 00:29:38.358866  525066 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1212 00:29:38.358874  525066 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, which is pull_progress_timeout / 10.
	I1212 00:29:38.358881  525066 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1212 00:29:38.358885  525066 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1212 00:29:38.358889  525066 command_runner.go:130] > # The mode of short name resolution.
	I1212 00:29:38.358896  525066 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1212 00:29:38.358903  525066 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1212 00:29:38.358908  525066 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1212 00:29:38.358913  525066 command_runner.go:130] > # short_name_mode = "enforcing"
	I1212 00:29:38.358919  525066 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1212 00:29:38.358925  525066 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1212 00:29:38.358929  525066 command_runner.go:130] > # oci_artifact_mount_support = true
	I1212 00:29:38.358935  525066 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 00:29:38.358938  525066 command_runner.go:130] > # CNI plugins.
	I1212 00:29:38.358941  525066 command_runner.go:130] > [crio.network]
	I1212 00:29:38.358947  525066 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 00:29:38.358952  525066 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1212 00:29:38.358956  525066 command_runner.go:130] > # cni_default_network = ""
	I1212 00:29:38.358966  525066 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 00:29:38.358970  525066 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 00:29:38.358975  525066 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 00:29:38.358979  525066 command_runner.go:130] > # plugin_dirs = [
	I1212 00:29:38.358982  525066 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 00:29:38.358985  525066 command_runner.go:130] > # ]
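A hedged Go sketch of the "pick up the first one found in network_dir" behavior noted above; the lexical ordering and the extension filter are assumptions about how "first" is determined:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d/") // default network_dir from above
	if err != nil {
		panic(err)
	}
	var names []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) { // assumed extensions for CNI configs
		case ".conf", ".conflist", ".json":
			names = append(names, e.Name())
		}
	}
	sort.Strings(names)
	if len(names) > 0 {
		fmt.Println("default CNI network config:", names[0])
	}
}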
	I1212 00:29:38.358989  525066 command_runner.go:130] > # List of included pod metrics.
	I1212 00:29:38.358993  525066 command_runner.go:130] > # included_pod_metrics = [
	I1212 00:29:38.359000  525066 command_runner.go:130] > # ]
	I1212 00:29:38.359005  525066 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1212 00:29:38.359010  525066 command_runner.go:130] > [crio.metrics]
	I1212 00:29:38.359017  525066 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 00:29:38.359024  525066 command_runner.go:130] > # enable_metrics = false
	I1212 00:29:38.359029  525066 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 00:29:38.359034  525066 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 00:29:38.359040  525066 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 00:29:38.359048  525066 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 00:29:38.359054  525066 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 00:29:38.359068  525066 command_runner.go:130] > # metrics_collectors = [
	I1212 00:29:38.359072  525066 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 00:29:38.359076  525066 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1212 00:29:38.359079  525066 command_runner.go:130] > # 	"containers_oom_total",
	I1212 00:29:38.359083  525066 command_runner.go:130] > # 	"processes_defunct",
	I1212 00:29:38.359087  525066 command_runner.go:130] > # 	"operations_total",
	I1212 00:29:38.359091  525066 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 00:29:38.359095  525066 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 00:29:38.359099  525066 command_runner.go:130] > # 	"operations_errors_total",
	I1212 00:29:38.359103  525066 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 00:29:38.359107  525066 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 00:29:38.359111  525066 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 00:29:38.359115  525066 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 00:29:38.359119  525066 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 00:29:38.359123  525066 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 00:29:38.359128  525066 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1212 00:29:38.359132  525066 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1212 00:29:38.359137  525066 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1212 00:29:38.359139  525066 command_runner.go:130] > # ]
	I1212 00:29:38.359145  525066 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1212 00:29:38.359149  525066 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1212 00:29:38.359155  525066 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 00:29:38.359158  525066 command_runner.go:130] > # metrics_port = 9090
	I1212 00:29:38.359167  525066 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 00:29:38.359171  525066 command_runner.go:130] > # metrics_socket = ""
	I1212 00:29:38.359176  525066 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 00:29:38.359182  525066 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 00:29:38.359188  525066 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 00:29:38.359192  525066 command_runner.go:130] > # certificate on any modification event.
	I1212 00:29:38.359196  525066 command_runner.go:130] > # metrics_cert = ""
	I1212 00:29:38.359201  525066 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 00:29:38.359206  525066 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 00:29:38.359209  525066 command_runner.go:130] > # metrics_key = ""
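With enable_metrics = true and the default metrics_host/metrics_port above, the endpoint can be scraped directly. A minimal Go sketch (illustrative; the metric name in the comment is from the collector list above):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Default metrics_host and metrics_port from the config above.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(body)) // Prometheus text format, e.g. crio_operations_total
}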
	I1212 00:29:38.359214  525066 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 00:29:38.359218  525066 command_runner.go:130] > [crio.tracing]
	I1212 00:29:38.359224  525066 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 00:29:38.359227  525066 command_runner.go:130] > # enable_tracing = false
	I1212 00:29:38.359233  525066 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1212 00:29:38.359237  525066 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1212 00:29:38.359243  525066 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1212 00:29:38.359249  525066 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 00:29:38.359253  525066 command_runner.go:130] > # CRI-O NRI configuration.
	I1212 00:29:38.359256  525066 command_runner.go:130] > [crio.nri]
	I1212 00:29:38.359260  525066 command_runner.go:130] > # Globally enable or disable NRI.
	I1212 00:29:38.359458  525066 command_runner.go:130] > # enable_nri = true
	I1212 00:29:38.359492  525066 command_runner.go:130] > # NRI socket to listen on.
	I1212 00:29:38.359531  525066 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1212 00:29:38.359552  525066 command_runner.go:130] > # NRI plugin directory to use.
	I1212 00:29:38.359571  525066 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1212 00:29:38.359603  525066 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1212 00:29:38.359625  525066 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1212 00:29:38.359646  525066 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1212 00:29:38.359766  525066 command_runner.go:130] > # nri_disable_connections = false
	I1212 00:29:38.359799  525066 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1212 00:29:38.359833  525066 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1212 00:29:38.359860  525066 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1212 00:29:38.359876  525066 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1212 00:29:38.359893  525066 command_runner.go:130] > # NRI default validator configuration.
	I1212 00:29:38.359933  525066 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1212 00:29:38.359959  525066 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1212 00:29:38.359990  525066 command_runner.go:130] > # can be restricted/rejected:
	I1212 00:29:38.360015  525066 command_runner.go:130] > # - OCI hook injection
	I1212 00:29:38.360033  525066 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1212 00:29:38.360064  525066 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1212 00:29:38.360089  525066 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1212 00:29:38.360107  525066 command_runner.go:130] > # - adjustment of linux namespaces
	I1212 00:29:38.360127  525066 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1212 00:29:38.360166  525066 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1212 00:29:38.360186  525066 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1212 00:29:38.360201  525066 command_runner.go:130] > #
	I1212 00:29:38.360237  525066 command_runner.go:130] > # [crio.nri.default_validator]
	I1212 00:29:38.360255  525066 command_runner.go:130] > # nri_enable_default_validator = false
	I1212 00:29:38.360272  525066 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1212 00:29:38.360303  525066 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1212 00:29:38.360330  525066 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1212 00:29:38.360348  525066 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1212 00:29:38.360476  525066 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1212 00:29:38.360648  525066 command_runner.go:130] > # nri_validator_required_plugins = [
	I1212 00:29:38.360681  525066 command_runner.go:130] > # ]
	I1212 00:29:38.360704  525066 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1212 00:29:38.360740  525066 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 00:29:38.360764  525066 command_runner.go:130] > [crio.stats]
	I1212 00:29:38.360783  525066 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 00:29:38.360814  525066 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 00:29:38.360847  525066 command_runner.go:130] > # stats_collection_period = 0
	I1212 00:29:38.360867  525066 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1212 00:29:38.360905  525066 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1212 00:29:38.360921  525066 command_runner.go:130] > # collection_period = 0
	I1212 00:29:38.360984  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313366715Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1212 00:29:38.361015  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313641917Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1212 00:29:38.361052  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313871475Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1212 00:29:38.361075  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.314022397Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1212 00:29:38.361124  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.314372427Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:38.361154  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.31485409Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1212 00:29:38.361178  525066 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 00:29:38.361311  525066 cni.go:84] Creating CNI manager for ""
	I1212 00:29:38.361353  525066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:38.361385  525066 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:29:38.361436  525066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-035643 NodeName:functional-035643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:29:38.361629  525066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-035643"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
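minikube renders the kubeadm manifest above from the options struct logged earlier. A minimal Go sketch of that kind of text/template rendering; this is not minikube's actual code, and the template keys are illustrative assumptions:

package main

import (
	"os"
	"text/template"
)

// Illustrative template covering just the InitConfiguration stanza above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	data := map[string]any{
		"AdvertiseAddress": "192.168.49.2",
		"APIServerPort":    8441,
		"CRISocket":        "unix:///var/run/crio/crio.sock",
		"NodeName":         "functional-035643",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}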
	
	I1212 00:29:38.361753  525066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:29:38.369085  525066 command_runner.go:130] > kubeadm
	I1212 00:29:38.369101  525066 command_runner.go:130] > kubectl
	I1212 00:29:38.369105  525066 command_runner.go:130] > kubelet
	I1212 00:29:38.369321  525066 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:29:38.369385  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:29:38.376829  525066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:29:38.389638  525066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:29:38.402701  525066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1212 00:29:38.415693  525066 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:29:38.420581  525066 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 00:29:38.420662  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:38.566232  525066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:39.219049  525066 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643 for IP: 192.168.49.2
	I1212 00:29:39.219079  525066 certs.go:195] generating shared ca certs ...
	I1212 00:29:39.219096  525066 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:39.219238  525066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 00:29:39.219285  525066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 00:29:39.219292  525066 certs.go:257] generating profile certs ...
	I1212 00:29:39.219491  525066 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key
	I1212 00:29:39.219603  525066 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493
	I1212 00:29:39.219699  525066 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key
	I1212 00:29:39.219742  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:29:39.219761  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:29:39.219773  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:29:39.219783  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:29:39.219798  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:29:39.219843  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:29:39.219860  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:29:39.219871  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:29:39.219967  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 00:29:39.220038  525066 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 00:29:39.220049  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:29:39.220117  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:29:39.220147  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:29:39.220202  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 00:29:39.220256  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:29:39.220332  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem -> /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.220378  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.220396  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.221003  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:29:39.242927  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:29:39.262484  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:29:39.285732  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:29:39.303346  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:29:39.320786  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:29:39.338821  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:29:39.356806  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:29:39.374381  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 00:29:39.392333  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 00:29:39.410089  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:29:39.427383  525066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:29:39.439725  525066 ssh_runner.go:195] Run: openssl version
	I1212 00:29:39.445636  525066 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 00:29:39.445982  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.453236  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 00:29:39.460672  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464184  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464289  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464344  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.505960  525066 command_runner.go:130] > 51391683
	I1212 00:29:39.506560  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:29:39.514611  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.522360  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 00:29:39.531109  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.534913  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.535312  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.535374  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.578207  525066 command_runner.go:130] > 3ec20f2e
	I1212 00:29:39.578374  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:29:39.586281  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.593845  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:29:39.601415  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605435  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605483  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605537  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.646250  525066 command_runner.go:130] > b5213941
	I1212 00:29:39.646757  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:29:39.654391  525066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:39.658287  525066 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:39.658314  525066 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 00:29:39.658322  525066 command_runner.go:130] > Device: 259,1	Inode: 2360480     Links: 1
	I1212 00:29:39.658330  525066 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:29:39.658336  525066 command_runner.go:130] > Access: 2025-12-12 00:25:30.972268820 +0000
	I1212 00:29:39.658341  525066 command_runner.go:130] > Modify: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658346  525066 command_runner.go:130] > Change: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658351  525066 command_runner.go:130] >  Birth: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658416  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:29:39.699997  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.700109  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:29:39.748952  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.749499  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:29:39.797710  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.798154  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:29:39.843103  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.843601  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:29:39.887374  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.887871  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:29:39.942362  525066 command_runner.go:130] > Certificate will not expire
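Each `openssl x509 -noout -checkend 86400` probe above asks whether a certificate expires within the next 24 hours. The same check in Go with crypto/x509 (the cert path is taken from the log; everything else is a sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Mirrors -checkend 86400: does the cert expire within the next 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}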
	I1212 00:29:39.942946  525066 kubeadm.go:401] StartCluster: {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:39.943046  525066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:29:39.943208  525066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:29:39.985575  525066 cri.go:89] found id: ""
	I1212 00:29:39.985700  525066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:29:39.993609  525066 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 00:29:39.993681  525066 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 00:29:39.993702  525066 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 00:29:39.994895  525066 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:29:39.994945  525066 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:29:39.995038  525066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:29:40.006978  525066 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:29:40.007554  525066 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-035643" does not appear in /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.007785  525066 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-487723/kubeconfig needs updating (will repair): [kubeconfig missing "functional-035643" cluster setting kubeconfig missing "functional-035643" context setting]
	I1212 00:29:40.008175  525066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.008787  525066 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.009179  525066 kapi.go:59] client config for functional-035643: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:40.009975  525066 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 00:29:40.010118  525066 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 00:29:40.010148  525066 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:29:40.010168  525066 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 00:29:40.010204  525066 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 00:29:40.010223  525066 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 00:29:40.010646  525066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:29:40.025803  525066 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 00:29:40.025893  525066 kubeadm.go:602] duration metric: took 30.929693ms to restartPrimaryControlPlane
	I1212 00:29:40.025918  525066 kubeadm.go:403] duration metric: took 82.978705ms to StartCluster
	I1212 00:29:40.025961  525066 settings.go:142] acquiring lock: {Name:mk274c10b2238dc32d72b68ac2e1ec517b8a72b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.026057  525066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.026847  525066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.027182  525066 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:29:40.027614  525066 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:29:40.027718  525066 addons.go:70] Setting storage-provisioner=true in profile "functional-035643"
	I1212 00:29:40.027733  525066 addons.go:239] Setting addon storage-provisioner=true in "functional-035643"
	I1212 00:29:40.027759  525066 host.go:66] Checking if "functional-035643" exists ...
	I1212 00:29:40.027683  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:40.027963  525066 addons.go:70] Setting default-storageclass=true in profile "functional-035643"
	I1212 00:29:40.028014  525066 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-035643"
	I1212 00:29:40.028265  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.028431  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.031408  525066 out.go:179] * Verifying Kubernetes components...
	I1212 00:29:40.035144  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:40.072983  525066 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.073191  525066 kapi.go:59] client config for functional-035643: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:40.073564  525066 addons.go:239] Setting addon default-storageclass=true in "functional-035643"
	I1212 00:29:40.073635  525066 host.go:66] Checking if "functional-035643" exists ...
	I1212 00:29:40.074143  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.079735  525066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:29:40.083203  525066 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:40.083224  525066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:29:40.083308  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:40.126926  525066 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:40.126953  525066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:29:40.127024  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:40.157562  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:40.176759  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
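sshutil.go dials the Docker-published SSH port (127.0.0.1:33183, mapped to the container's port 22 by the docker inspect calls above) with the machine's id_rsa key. A bare-bones equivalent using golang.org/x/crypto/ssh; skipping host-key verification is an assumption that only suits throwaway test VMs:

```go
// Minimal version of the sshutil.go dial from the log.
package sketch

import (
	"os"

	"golang.org/x/crypto/ssh"
)

func dial(keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	return ssh.Dial("tcp", "127.0.0.1:33183", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: acceptable for ephemeral test machines only
	})
}
```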
	I1212 00:29:40.228329  525066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:40.297459  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:40.324896  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:40.970121  525066 node_ready.go:35] waiting up to 6m0s for node "functional-035643" to be "Ready" ...
	I1212 00:29:40.970322  525066 type.go:168] "Request Body" body=""
	I1212 00:29:40.970407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:40.970561  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:40.970616  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.970718  525066 retry.go:31] will retry after 204.18222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
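Every apply failure in this run has the same shape: kubectl's client-side validation first downloads the OpenAPI schema from the API server, and with nothing listening on localhost:8441 the TCP connect is refused before the manifest is even parsed. As the error text itself suggests, validation can be skipped:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --validate=false -f /etc/kubernetes/addons/storage-provisioner.yaml

though with the apiserver down the apply would still fail at submission time; skipping validation only changes where the connection refusal surfaces.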
	I1212 00:29:40.970890  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:40.970976  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.971113  525066 retry.go:31] will retry after 159.994769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.971100  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:41.131658  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:41.175423  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:41.193550  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.193607  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.193625  525066 retry.go:31] will retry after 255.861028ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.245543  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.245583  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.245622  525066 retry.go:31] will retry after 363.545377ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.449762  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:41.471214  525066 type.go:168] "Request Body" body=""
	I1212 00:29:41.471319  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:41.471599  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:41.515695  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.515762  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.515785  525066 retry.go:31] will retry after 558.343872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.610204  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:41.681946  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.682005  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.682029  525066 retry.go:31] will retry after 553.13192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.971401  525066 type.go:168] "Request Body" body=""
	I1212 00:29:41.971545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:41.971960  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:42.075338  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:42.153789  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.153831  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.153875  525066 retry.go:31] will retry after 562.779161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.238244  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:42.309134  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.309235  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.309278  525066 retry.go:31] will retry after 839.848798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.470350  525066 type.go:168] "Request Body" body=""
	I1212 00:29:42.470438  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:42.470717  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:42.717299  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:42.779260  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.779300  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.779319  525066 retry.go:31] will retry after 1.384955704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.970802  525066 type.go:168] "Request Body" body=""
	I1212 00:29:42.970878  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:42.971167  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:42.971212  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
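node_ready.go is polling GET /api/v1/nodes/functional-035643 roughly every 500ms and checking the node's Ready condition; the empty Response status with milliseconds=0 in the round_trippers lines means the TCP connection was refused, so no HTTP exchange happened at all. A sketch of the same readiness wait with client-go (clientset as built in the earlier sketch; the 500ms/6m values mirror the log's "waiting up to 6m0s"):

```go
// Poll a node's Ready condition, retrying on transient errors, as node_ready.go does.
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // connection refused etc.: keep retrying, as the log does
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```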
	I1212 00:29:43.149494  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:43.213920  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:43.218125  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:43.218200  525066 retry.go:31] will retry after 1.154245365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:43.470517  525066 type.go:168] "Request Body" body=""
	I1212 00:29:43.470604  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:43.470922  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:43.970580  525066 type.go:168] "Request Body" body=""
	I1212 00:29:43.970743  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:43.971073  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:44.165470  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:44.225816  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:44.225880  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.225901  525066 retry.go:31] will retry after 2.063043455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.373318  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:44.437999  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:44.441831  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.441865  525066 retry.go:31] will retry after 1.856604218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.471071  525066 type.go:168] "Request Body" body=""
	I1212 00:29:44.471144  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:44.471437  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:44.971289  525066 type.go:168] "Request Body" body=""
	I1212 00:29:44.971379  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:44.971730  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:44.971780  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:45.470482  525066 type.go:168] "Request Body" body=""
	I1212 00:29:45.470622  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:45.470959  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:45.970491  525066 type.go:168] "Request Body" body=""
	I1212 00:29:45.970565  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:45.970940  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:46.289221  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:46.298644  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:46.387298  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:46.387341  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.387359  525066 retry.go:31] will retry after 2.162137781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.389923  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:46.389964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.389984  525066 retry.go:31] will retry after 2.885458194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.471167  525066 type.go:168] "Request Body" body=""
	I1212 00:29:46.471247  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:46.471565  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:46.971278  525066 type.go:168] "Request Body" body=""
	I1212 00:29:46.971393  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:46.971713  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:46.971800  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:47.471406  525066 type.go:168] "Request Body" body=""
	I1212 00:29:47.471481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:47.471794  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:47.970503  525066 type.go:168] "Request Body" body=""
	I1212 00:29:47.970590  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:47.970978  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:48.470459  525066 type.go:168] "Request Body" body=""
	I1212 00:29:48.470563  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:48.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:48.550228  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:48.609468  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:48.609564  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:48.609586  525066 retry.go:31] will retry after 5.142469671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
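Note how retry.go's delays grow over the run: ~0.2s at first, then ~0.5s, 1–2s, and by this point ~5s (later ~9s), i.e. exponential backoff with jitter; the uneven steps are the jitter. A sketch of the same policy with apimachinery's wait package; the Factor and Jitter values are illustrative guesses, not minikube's actual constants:

```go
// Retry an apply with jittered exponential backoff, as retry.go does in the log.
package sketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func applyWithBackoff(apply func() error) error {
	backoff := wait.Backoff{
		Duration: 200 * time.Millisecond, // first delay, ~the 204ms seen earlier
		Factor:   1.6,                    // growth per step (illustrative)
		Jitter:   0.5,                    // randomizes each delay, hence the uneven intervals
		Steps:    15,
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := apply(); err != nil {
			return false, nil // retry, like addons.go:477 "apply failed, will retry"
		}
		return true, nil
	})
}
```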
	I1212 00:29:48.970999  525066 type.go:168] "Request Body" body=""
	I1212 00:29:48.971081  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:48.971378  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:49.275822  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:49.338921  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:49.338964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:49.338982  525066 retry.go:31] will retry after 3.130992497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:49.471334  525066 type.go:168] "Request Body" body=""
	I1212 00:29:49.471407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:49.471715  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:49.471774  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:49.970357  525066 type.go:168] "Request Body" body=""
	I1212 00:29:49.970428  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:49.970800  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:50.470449  525066 type.go:168] "Request Body" body=""
	I1212 00:29:50.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:50.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:50.970632  525066 type.go:168] "Request Body" body=""
	I1212 00:29:50.970736  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:50.971135  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:51.470850  525066 type.go:168] "Request Body" body=""
	I1212 00:29:51.470934  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:51.471301  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:51.971160  525066 type.go:168] "Request Body" body=""
	I1212 00:29:51.971232  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:51.971562  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:51.971629  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:52.470175  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:52.470342  525066 type.go:168] "Request Body" body=""
	I1212 00:29:52.470395  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:52.470704  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:52.525865  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:52.529169  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:52.529199  525066 retry.go:31] will retry after 5.202817608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:52.970512  525066 type.go:168] "Request Body" body=""
	I1212 00:29:52.970577  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:52.970929  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:53.470488  525066 type.go:168] "Request Body" body=""
	I1212 00:29:53.470560  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:53.470915  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:53.752286  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:53.818071  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:53.818120  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:53.818138  525066 retry.go:31] will retry after 7.493688168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:53.970432  525066 type.go:168] "Request Body" body=""
	I1212 00:29:53.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:53.970820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:54.470420  525066 type.go:168] "Request Body" body=""
	I1212 00:29:54.470487  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:54.470795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:54.470851  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:54.970811  525066 type.go:168] "Request Body" body=""
	I1212 00:29:54.970890  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:54.971241  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:55.471081  525066 type.go:168] "Request Body" body=""
	I1212 00:29:55.471155  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:55.471463  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:55.971189  525066 type.go:168] "Request Body" body=""
	I1212 00:29:55.971259  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:55.971627  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:56.470367  525066 type.go:168] "Request Body" body=""
	I1212 00:29:56.470446  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:56.470766  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:56.970401  525066 type.go:168] "Request Body" body=""
	I1212 00:29:56.970473  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:56.970813  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:56.970885  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:57.470373  525066 type.go:168] "Request Body" body=""
	I1212 00:29:57.470446  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:57.470716  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:57.732201  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:57.788085  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:57.792139  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:57.792170  525066 retry.go:31] will retry after 6.658571386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:57.970423  525066 type.go:168] "Request Body" body=""
	I1212 00:29:57.970495  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:57.970833  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:58.470545  525066 type.go:168] "Request Body" body=""
	I1212 00:29:58.470620  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:58.470971  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:58.970653  525066 type.go:168] "Request Body" body=""
	I1212 00:29:58.970748  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:58.971004  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:58.971063  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:59.470479  525066 type.go:168] "Request Body" body=""
	I1212 00:29:59.470553  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:59.470886  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:59.970903  525066 type.go:168] "Request Body" body=""
	I1212 00:29:59.970985  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:59.971299  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:00.470879  525066 type.go:168] "Request Body" body=""
	I1212 00:30:00.470978  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:00.471351  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:00.971213  525066 type.go:168] "Request Body" body=""
	I1212 00:30:00.971307  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:00.971736  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:00.971826  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:01.312112  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:01.378306  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:01.384542  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:01.384581  525066 retry.go:31] will retry after 9.383564416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
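The storageclass apply fails before validation even starts: kubectl first downloads the OpenAPI schema from the apiserver, and with nothing listening on port 8441 that download is refused (the hint about --validate=false would only skip the schema fetch; the apply itself would still fail against a down apiserver). The addon applier then retries with a randomized delay, as the "will retry after 9.383564416s" line shows. A minimal sketch of that apply-and-retry shape, assuming a hypothetical applyWithRetry helper rather than minikube's actual retry.go:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry runs `kubectl apply` and, on failure, sleeps a
    // jittered backoff before trying again. Hypothetical helper; the
    // real retry logic lives in minikube's retry package.
    func applyWithRetry(kubeconfig, manifest string, attempts int) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig,
                "kubectl", "apply", "--force", "-f", manifest)
            out, err := cmd.CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
            // Jittered delay, mirroring the log's 8-36s spread of retry intervals.
            delay := time.Duration(5+rand.Intn(30)) * time.Second
            fmt.Printf("apply failed, will retry after %s\n", delay)
            time.Sleep(delay)
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry("/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
            fmt.Println("giving up:", err)
        }
    }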
	I1212 00:30:01.470976  525066 type.go:168] "Request Body" body=""
	I1212 00:30:01.471119  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:01.471452  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:01.971252  525066 type.go:168] "Request Body" body=""
	I1212 00:30:01.971351  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:01.971665  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:02.470427  525066 type.go:168] "Request Body" body=""
	I1212 00:30:02.470507  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:02.470916  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:02.970616  525066 type.go:168] "Request Body" body=""
	I1212 00:30:02.970721  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:02.971066  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:03.470621  525066 type.go:168] "Request Body" body=""
	I1212 00:30:03.470716  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:03.470992  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:03.471037  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:03.970767  525066 type.go:168] "Request Body" body=""
	I1212 00:30:03.970845  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:03.971214  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:04.450915  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:04.471249  525066 type.go:168] "Request Body" body=""
	I1212 00:30:04.471318  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:04.471581  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:04.504992  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:04.508551  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:04.508584  525066 retry.go:31] will retry after 16.635241248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:04.971271  525066 type.go:168] "Request Body" body=""
	I1212 00:30:04.971364  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:04.971628  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:05.470444  525066 type.go:168] "Request Body" body=""
	I1212 00:30:05.470516  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:05.470888  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:05.970469  525066 type.go:168] "Request Body" body=""
	I1212 00:30:05.970547  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:05.970907  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:05.970959  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
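In parallel with the addon retries, node_ready.go polls the node's Ready condition every ~500ms and treats the connect-refused error as retryable rather than fatal, logging it as a warning and continuing. A sketch of the same wait loop using client-go (waitNodeReady is an illustrative name, not minikube's function):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition every 500ms until it
    // is True or the timeout expires. Transient errors (for example
    // connection refused while the apiserver restarts) are logged and
    // retried instead of aborting the wait.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    fmt.Printf("error getting node %q (will retry): %v\n", name, err)
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)
        if err := waitNodeReady(context.Background(), cs, "functional-035643", 4*time.Minute); err != nil {
            fmt.Println("node never became Ready:", err)
        }
    }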
	I1212 00:30:06.470384  525066 type.go:168] "Request Body" body=""
	I1212 00:30:06.470477  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:06.470800  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:06.970490  525066 type.go:168] "Request Body" body=""
	I1212 00:30:06.970569  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:06.970938  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:07.470503  525066 type.go:168] "Request Body" body=""
	I1212 00:30:07.470599  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:07.470930  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:07.970442  525066 type.go:168] "Request Body" body=""
	I1212 00:30:07.970519  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:07.970789  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:08.470442  525066 type.go:168] "Request Body" body=""
	I1212 00:30:08.470518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:08.470850  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:08.470905  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:08.970456  525066 type.go:168] "Request Body" body=""
	I1212 00:30:08.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:08.970891  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:09.470554  525066 type.go:168] "Request Body" body=""
	I1212 00:30:09.470632  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:09.470900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:09.970929  525066 type.go:168] "Request Body" body=""
	I1212 00:30:09.971012  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:09.971327  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:10.470376  525066 type.go:168] "Request Body" body=""
	I1212 00:30:10.470457  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:10.470750  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:10.768281  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:10.825103  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:10.828984  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:10.829014  525066 retry.go:31] will retry after 8.149625317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:10.971311  525066 type.go:168] "Request Body" body=""
	I1212 00:30:10.971379  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:10.971644  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:10.971683  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:11.470436  525066 type.go:168] "Request Body" body=""
	I1212 00:30:11.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:11.470843  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:11.970432  525066 type.go:168] "Request Body" body=""
	I1212 00:30:11.970527  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:11.970846  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:12.470525  525066 type.go:168] "Request Body" body=""
	I1212 00:30:12.470603  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:12.470941  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:12.970791  525066 type.go:168] "Request Body" body=""
	I1212 00:30:12.970866  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:12.971205  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:13.470475  525066 type.go:168] "Request Body" body=""
	I1212 00:30:13.470560  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:13.470875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:13.470931  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:13.970549  525066 type.go:168] "Request Body" body=""
	I1212 00:30:13.970621  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:13.970905  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:14.470469  525066 type.go:168] "Request Body" body=""
	I1212 00:30:14.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:14.470911  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:14.970901  525066 type.go:168] "Request Body" body=""
	I1212 00:30:14.970980  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:14.971358  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:15.471006  525066 type.go:168] "Request Body" body=""
	I1212 00:30:15.471085  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:15.471350  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:15.471390  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:15.971171  525066 type.go:168] "Request Body" body=""
	I1212 00:30:15.971259  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:15.971595  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:16.471255  525066 type.go:168] "Request Body" body=""
	I1212 00:30:16.471330  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:16.471636  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:16.970382  525066 type.go:168] "Request Body" body=""
	I1212 00:30:16.970466  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:16.970768  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:17.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:30:17.470517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:17.470833  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:17.970436  525066 type.go:168] "Request Body" body=""
	I1212 00:30:17.970514  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:17.970839  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:17.970896  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:18.470516  525066 type.go:168] "Request Body" body=""
	I1212 00:30:18.470594  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:18.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:18.970641  525066 type.go:168] "Request Body" body=""
	I1212 00:30:18.970763  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:18.971104  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:18.979423  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:19.044083  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:19.044119  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:19.044140  525066 retry.go:31] will retry after 30.537522265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:19.470570  525066 type.go:168] "Request Body" body=""
	I1212 00:30:19.470653  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:19.471007  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:19.971048  525066 type.go:168] "Request Body" body=""
	I1212 00:30:19.971122  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:19.971389  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:19.971439  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:20.470412  525066 type.go:168] "Request Body" body=""
	I1212 00:30:20.470491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:20.470835  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:20.970464  525066 type.go:168] "Request Body" body=""
	I1212 00:30:20.970544  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:20.970890  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:21.144446  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:21.207915  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:21.207964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:21.207983  525066 retry.go:31] will retry after 20.295589284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
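Every failure in this stretch has the same root cause: nothing is accepting connections on 192.168.49.2:8441, so the OpenAPI fetch on localhost:8441 and the node polls are both refused. When diagnosing this by hand, probing the apiserver's /readyz endpoint (served unauthenticated under default RBAC) separates "apiserver not listening" from "apiserver up but unhealthy". A small sketch, with the insecure TLS config appropriate only for a local smoke test:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeReadyz hits the apiserver's /readyz endpoint. A refused
    // connection means the apiserver is not listening at all; a non-200
    // response means it is listening but not yet ready.
    func probeReadyz(addr string) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Skip certificate checks: acceptable for a local probe, never for production.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://" + addr + "/readyz")
        if err != nil {
            return fmt.Errorf("apiserver unreachable: %w", err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("apiserver up but not ready: %s", resp.Status)
        }
        return nil
    }

    func main() {
        fmt.Println(probeReadyz("192.168.49.2:8441"))
    }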
	I1212 00:30:21.471340  525066 type.go:168] "Request Body" body=""
	I1212 00:30:21.471410  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:21.471696  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:21.970430  525066 type.go:168] "Request Body" body=""
	I1212 00:30:21.970500  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:21.970808  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:22.470556  525066 type.go:168] "Request Body" body=""
	I1212 00:30:22.470633  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:22.470953  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:22.471006  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:22.970442  525066 type.go:168] "Request Body" body=""
	I1212 00:30:22.970508  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:22.970782  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:23.470501  525066 type.go:168] "Request Body" body=""
	I1212 00:30:23.470601  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:23.470922  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:23.970478  525066 type.go:168] "Request Body" body=""
	I1212 00:30:23.970553  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:23.970885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:24.470551  525066 type.go:168] "Request Body" body=""
	I1212 00:30:24.470618  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:24.470920  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:24.971014  525066 type.go:168] "Request Body" body=""
	I1212 00:30:24.971090  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:24.971391  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:24.971444  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:25.471210  525066 type.go:168] "Request Body" body=""
	I1212 00:30:25.471284  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:25.471604  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:25.971349  525066 type.go:168] "Request Body" body=""
	I1212 00:30:25.971417  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:25.971673  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:26.470375  525066 type.go:168] "Request Body" body=""
	I1212 00:30:26.470450  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:26.470752  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:26.970468  525066 type.go:168] "Request Body" body=""
	I1212 00:30:26.970568  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:26.970900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:27.470551  525066 type.go:168] "Request Body" body=""
	I1212 00:30:27.470632  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:27.470951  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:27.471009  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:27.970461  525066 type.go:168] "Request Body" body=""
	I1212 00:30:27.970535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:27.970896  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:28.470447  525066 type.go:168] "Request Body" body=""
	I1212 00:30:28.470517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:28.470841  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:28.970537  525066 type.go:168] "Request Body" body=""
	I1212 00:30:28.970615  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:28.970891  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:29.470450  525066 type.go:168] "Request Body" body=""
	I1212 00:30:29.470530  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:29.470908  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:29.970898  525066 type.go:168] "Request Body" body=""
	I1212 00:30:29.970970  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:29.971305  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:29.971361  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:30.470857  525066 type.go:168] "Request Body" body=""
	I1212 00:30:30.470924  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:30.471192  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:30.971054  525066 type.go:168] "Request Body" body=""
	I1212 00:30:30.971147  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:30.971476  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:31.471280  525066 type.go:168] "Request Body" body=""
	I1212 00:30:31.471354  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:31.471652  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:31.970396  525066 type.go:168] "Request Body" body=""
	I1212 00:30:31.970469  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:31.970748  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:32.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:30:32.470524  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:32.470875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:32.470929  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:32.970467  525066 type.go:168] "Request Body" body=""
	I1212 00:30:32.970548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:32.970910  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:33.470548  525066 type.go:168] "Request Body" body=""
	I1212 00:30:33.470621  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:33.470958  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:33.970663  525066 type.go:168] "Request Body" body=""
	I1212 00:30:33.970768  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:33.971120  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:34.470641  525066 type.go:168] "Request Body" body=""
	I1212 00:30:34.470734  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:34.471055  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:34.471106  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:34.971029  525066 type.go:168] "Request Body" body=""
	I1212 00:30:34.971106  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:34.971362  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:35.471168  525066 type.go:168] "Request Body" body=""
	I1212 00:30:35.471237  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:35.471543  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:35.971213  525066 type.go:168] "Request Body" body=""
	I1212 00:30:35.971284  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:35.971613  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:36.471350  525066 type.go:168] "Request Body" body=""
	I1212 00:30:36.471428  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:36.471693  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:36.471739  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:36.970449  525066 type.go:168] "Request Body" body=""
	I1212 00:30:36.970521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:36.970836  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:37.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:30:37.470518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:37.470867  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:37.970336  525066 type.go:168] "Request Body" body=""
	I1212 00:30:37.970408  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:37.970717  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:38.470440  525066 type.go:168] "Request Body" body=""
	I1212 00:30:38.470510  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:38.470841  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:38.970461  525066 type.go:168] "Request Body" body=""
	I1212 00:30:38.970541  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:38.970920  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:38.970990  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:39.470649  525066 type.go:168] "Request Body" body=""
	I1212 00:30:39.470739  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:39.471073  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:39.970912  525066 type.go:168] "Request Body" body=""
	I1212 00:30:39.970992  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:39.971332  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:40.471276  525066 type.go:168] "Request Body" body=""
	I1212 00:30:40.471354  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:40.471676  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:40.970403  525066 type.go:168] "Request Body" body=""
	I1212 00:30:40.970481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:40.970820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:41.470514  525066 type.go:168] "Request Body" body=""
	I1212 00:30:41.470595  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:41.470937  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:41.471004  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:41.504392  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:41.561180  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:41.564784  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:41.564819  525066 retry.go:31] will retry after 29.925155821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:41.971369  525066 type.go:168] "Request Body" body=""
	I1212 00:30:41.971443  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:41.971817  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:42.470482  525066 type.go:168] "Request Body" body=""
	I1212 00:30:42.470569  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:42.470884  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:42.970674  525066 type.go:168] "Request Body" body=""
	I1212 00:30:42.970766  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:42.971092  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:43.470816  525066 type.go:168] "Request Body" body=""
	I1212 00:30:43.470887  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:43.471196  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:43.471261  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:43.970994  525066 type.go:168] "Request Body" body=""
	I1212 00:30:43.971095  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:43.971420  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:44.471076  525066 type.go:168] "Request Body" body=""
	I1212 00:30:44.471150  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:44.471470  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:44.971260  525066 type.go:168] "Request Body" body=""
	I1212 00:30:44.971332  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:44.971645  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:45.470349  525066 type.go:168] "Request Body" body=""
	I1212 00:30:45.470425  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:45.470820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:45.970433  525066 type.go:168] "Request Body" body=""
	I1212 00:30:45.970513  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:45.970829  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:45.970886  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:46.470441  525066 type.go:168] "Request Body" body=""
	I1212 00:30:46.470539  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:46.470878  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:46.970390  525066 type.go:168] "Request Body" body=""
	I1212 00:30:46.970456  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:46.970764  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:47.470436  525066 type.go:168] "Request Body" body=""
	I1212 00:30:47.470515  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:47.470849  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:47.970601  525066 type.go:168] "Request Body" body=""
	I1212 00:30:47.970697  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:47.970992  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:47.971047  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:48.470367  525066 type.go:168] "Request Body" body=""
	I1212 00:30:48.470433  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:48.470671  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:48.970405  525066 type.go:168] "Request Body" body=""
	I1212 00:30:48.970490  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:48.970853  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:49.470483  525066 type.go:168] "Request Body" body=""
	I1212 00:30:49.470560  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:49.470858  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:49.582168  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:49.635241  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:49.638539  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:49.638564  525066 retry.go:31] will retry after 36.706436998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
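	[editor's note] The storageclass apply above fails while the apiserver is unreachable, so minikube's retry helper (retry.go in the log) reschedules it; "will retry after 36.706436998s" is that backoff firing. A minimal stdlib sketch of the same retry-with-delay shape, assuming a simple randomized delay; only the logged ~36.7 s value and the "will retry after" wording come from this log:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // retryWithDelay runs fn and, on failure, logs a "will retry after"
	    // message and sleeps before the next attempt, mirroring the retry.go lines.
	    func retryWithDelay(attempts int, maxDelay time.Duration, fn func() error) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = fn(); err == nil {
	                return nil
	            }
	            d := time.Duration(rand.Int63n(int64(maxDelay))) // jitter is an assumption
	            fmt.Printf("will retry after %s: %v\n", d, err)
	            time.Sleep(d)
	        }
	        return err
	    }

	    func main() {
	        err := retryWithDelay(3, 40*time.Second, func() error {
	            // Stand-in for the failing "kubectl apply --force -f ..." callback.
	            return errors.New("dial tcp [::1]:8441: connect: connection refused")
	        })
	        fmt.Println("final:", err)
	    }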
	I1212 00:30:49.971245  525066 type.go:168] "Request Body" body=""
	I1212 00:30:49.971317  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:49.971579  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:49.971624  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:50.470498  525066 type.go:168] "Request Body" body=""
	I1212 00:30:50.470570  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:50.470924  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:50.970508  525066 type.go:168] "Request Body" body=""
	I1212 00:30:50.970583  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:50.970916  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:51.470404  525066 type.go:168] "Request Body" body=""
	I1212 00:30:51.470472  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:51.470752  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:51.970450  525066 type.go:168] "Request Body" body=""
	I1212 00:30:51.970531  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:51.970886  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:52.470591  525066 type.go:168] "Request Body" body=""
	I1212 00:30:52.470671  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:52.470990  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:52.471051  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:52.970388  525066 type.go:168] "Request Body" body=""
	I1212 00:30:52.970466  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:52.970738  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:53.470464  525066 type.go:168] "Request Body" body=""
	I1212 00:30:53.470534  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:53.470877  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:53.970467  525066 type.go:168] "Request Body" body=""
	I1212 00:30:53.970540  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:53.970875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:54.470561  525066 type.go:168] "Request Body" body=""
	I1212 00:30:54.470625  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:54.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:54.970766  525066 type.go:168] "Request Body" body=""
	I1212 00:30:54.970842  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:54.971159  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:54.971210  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:55.470744  525066 type.go:168] "Request Body" body=""
	I1212 00:30:55.470816  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:55.471136  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:55.970919  525066 type.go:168] "Request Body" body=""
	I1212 00:30:55.970989  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:55.971245  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:56.471021  525066 type.go:168] "Request Body" body=""
	I1212 00:30:56.471102  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:56.471459  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:56.971297  525066 type.go:168] "Request Body" body=""
	I1212 00:30:56.971380  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:56.971721  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:56.971777  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:57.470392  525066 type.go:168] "Request Body" body=""
	I1212 00:30:57.470466  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:57.470735  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:57.970426  525066 type.go:168] "Request Body" body=""
	I1212 00:30:57.970498  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:57.970846  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:58.470550  525066 type.go:168] "Request Body" body=""
	I1212 00:30:58.470627  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:58.470983  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:58.970661  525066 type.go:168] "Request Body" body=""
	I1212 00:30:58.970747  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:58.971040  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:59.470746  525066 type.go:168] "Request Body" body=""
	I1212 00:30:59.470824  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:59.471166  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:59.471220  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:59.970964  525066 type.go:168] "Request Body" body=""
	I1212 00:30:59.971041  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:59.971352  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:00.470404  525066 type.go:168] "Request Body" body=""
	I1212 00:31:00.470477  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:00.470773  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:00.970466  525066 type.go:168] "Request Body" body=""
	I1212 00:31:00.970543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:00.970928  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:01.470645  525066 type.go:168] "Request Body" body=""
	I1212 00:31:01.470749  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:01.471096  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:01.970787  525066 type.go:168] "Request Body" body=""
	I1212 00:31:01.970856  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:01.971135  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:01.971178  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:02.470982  525066 type.go:168] "Request Body" body=""
	I1212 00:31:02.471077  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:02.471408  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:02.971193  525066 type.go:168] "Request Body" body=""
	I1212 00:31:02.971269  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:02.971592  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:03.471340  525066 type.go:168] "Request Body" body=""
	I1212 00:31:03.471407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:03.471649  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:03.970329  525066 type.go:168] "Request Body" body=""
	I1212 00:31:03.970409  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:03.970755  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:04.470469  525066 type.go:168] "Request Body" body=""
	I1212 00:31:04.470553  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:04.470917  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:04.470977  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:04.971081  525066 type.go:168] "Request Body" body=""
	I1212 00:31:04.971152  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:04.971443  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:05.471286  525066 type.go:168] "Request Body" body=""
	I1212 00:31:05.471363  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:05.471677  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:05.970382  525066 type.go:168] "Request Body" body=""
	I1212 00:31:05.970464  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:05.970758  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:06.470420  525066 type.go:168] "Request Body" body=""
	I1212 00:31:06.470484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:06.470788  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:06.970555  525066 type.go:168] "Request Body" body=""
	I1212 00:31:06.970636  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:06.970974  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:06.971042  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:07.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:31:07.470530  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:07.470902  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:07.970448  525066 type.go:168] "Request Body" body=""
	I1212 00:31:07.970518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:07.970835  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:08.470450  525066 type.go:168] "Request Body" body=""
	I1212 00:31:08.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:08.470867  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:08.970558  525066 type.go:168] "Request Body" body=""
	I1212 00:31:08.970638  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:08.970976  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:09.470659  525066 type.go:168] "Request Body" body=""
	I1212 00:31:09.470748  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:09.471069  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:09.471163  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:09.971114  525066 type.go:168] "Request Body" body=""
	I1212 00:31:09.971187  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:09.971512  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:10.470533  525066 type.go:168] "Request Body" body=""
	I1212 00:31:10.470613  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:10.470969  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:10.970732  525066 type.go:168] "Request Body" body=""
	I1212 00:31:10.970807  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:10.971084  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:11.470443  525066 type.go:168] "Request Body" body=""
	I1212 00:31:11.470518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:11.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:11.491140  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:31:11.552135  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:11.552186  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:11.552275  525066 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
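	[editor's note] Every failure in this stretch reduces to one root cause: nothing is listening on the apiserver port, so both the direct 192.168.49.2:8441 node GETs and kubectl's localhost:8441 openapi fetch get "connection refused". The --validate=false hint in the kubectl error would not rescue the addon here, since the apply itself still needs a live server. A quick TCP probe of the kind one might use to confirm the port is down, a sketch with the address taken from this log:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // Apiserver address from the failing requests in this log.
	        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	        if err != nil {
	            // Prints "connect: connection refused" while the apiserver is down.
	            fmt.Println("apiserver unreachable:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("apiserver port is accepting connections")
	    }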
	I1212 00:31:11.970638  525066 type.go:168] "Request Body" body=""
	I1212 00:31:11.970757  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:11.971089  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:11.971151  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:12.470532  525066 type.go:168] "Request Body" body=""
	I1212 00:31:12.470609  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:12.470899  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:12.970411  525066 type.go:168] "Request Body" body=""
	I1212 00:31:12.970504  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:12.970815  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:13.470503  525066 type.go:168] "Request Body" body=""
	I1212 00:31:13.470574  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:13.470924  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:13.970619  525066 type.go:168] "Request Body" body=""
	I1212 00:31:13.970706  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:13.970963  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:14.470738  525066 type.go:168] "Request Body" body=""
	I1212 00:31:14.470812  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:14.471133  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:14.471187  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:14.971146  525066 type.go:168] "Request Body" body=""
	I1212 00:31:14.971222  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:14.971541  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:15.471278  525066 type.go:168] "Request Body" body=""
	I1212 00:31:15.471365  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:15.471609  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:15.970340  525066 type.go:168] "Request Body" body=""
	I1212 00:31:15.970431  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:15.970804  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:16.470509  525066 type.go:168] "Request Body" body=""
	I1212 00:31:16.470591  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:16.470920  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:16.970406  525066 type.go:168] "Request Body" body=""
	I1212 00:31:16.970483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:16.970790  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:16.970848  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:17.470447  525066 type.go:168] "Request Body" body=""
	I1212 00:31:17.470519  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:17.470842  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:17.970550  525066 type.go:168] "Request Body" body=""
	I1212 00:31:17.970627  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:17.970937  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:18.470378  525066 type.go:168] "Request Body" body=""
	I1212 00:31:18.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:18.470742  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:18.970443  525066 type.go:168] "Request Body" body=""
	I1212 00:31:18.970523  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:18.970864  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:18.970923  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:19.470460  525066 type.go:168] "Request Body" body=""
	I1212 00:31:19.470532  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:19.470860  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:19.970827  525066 type.go:168] "Request Body" body=""
	I1212 00:31:19.970900  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:19.971156  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:20.471114  525066 type.go:168] "Request Body" body=""
	I1212 00:31:20.471192  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:20.471496  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:20.971299  525066 type.go:168] "Request Body" body=""
	I1212 00:31:20.971376  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:20.971723  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:20.971777  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:21.471362  525066 type.go:168] "Request Body" body=""
	I1212 00:31:21.471430  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:21.471729  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:21.970437  525066 type.go:168] "Request Body" body=""
	I1212 00:31:21.970510  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:21.970868  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:22.470577  525066 type.go:168] "Request Body" body=""
	I1212 00:31:22.470650  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:22.470985  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:22.970698  525066 type.go:168] "Request Body" body=""
	I1212 00:31:22.970765  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:22.971007  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:23.470436  525066 type.go:168] "Request Body" body=""
	I1212 00:31:23.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:23.470861  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:23.470914  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:23.970561  525066 type.go:168] "Request Body" body=""
	I1212 00:31:23.970643  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:23.970973  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:24.470353  525066 type.go:168] "Request Body" body=""
	I1212 00:31:24.470466  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:24.470739  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:24.970663  525066 type.go:168] "Request Body" body=""
	I1212 00:31:24.970762  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:24.971091  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:25.470468  525066 type.go:168] "Request Body" body=""
	I1212 00:31:25.470542  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:25.470865  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:25.970393  525066 type.go:168] "Request Body" body=""
	I1212 00:31:25.970464  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:25.970807  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:25.970864  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:26.345425  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:31:26.402811  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:26.406955  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:26.407059  525066 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 00:31:26.410095  525066 out.go:179] * Enabled addons: 
	I1212 00:31:26.413891  525066 addons.go:530] duration metric: took 1m46.38627975s for enable addons: enabled=[]
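	[editor's note] At this point minikube stops retrying and records the elapsed time; "enabled=[]" shows that no addon callback ever succeeded. The "1m46.38627975s" form is simply Go's time.Duration string, as a small sketch of the duration-metric line illustrates (the sleep is a stand-in for the enable-addons work):

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    func main() {
	        start := time.Now()
	        time.Sleep(50 * time.Millisecond) // stand-in for the addon enable loop
	        // time.Since(start) stringifies in the "1m46.38627975s" style seen above.
	        fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
	            time.Since(start), []string{})
	    }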
	I1212 00:31:26.471160  525066 type.go:168] "Request Body" body=""
	I1212 00:31:26.471237  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:26.471562  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:26.971357  525066 type.go:168] "Request Body" body=""
	I1212 00:31:26.971432  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:26.971737  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:27.470432  525066 type.go:168] "Request Body" body=""
	I1212 00:31:27.470500  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:27.470799  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:27.970424  525066 type.go:168] "Request Body" body=""
	I1212 00:31:27.970502  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:27.970862  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:27.970917  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:28.470589  525066 type.go:168] "Request Body" body=""
	I1212 00:31:28.470667  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:28.470983  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:28.970654  525066 type.go:168] "Request Body" body=""
	I1212 00:31:28.970741  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:28.970990  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:29.470447  525066 type.go:168] "Request Body" body=""
	I1212 00:31:29.470518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:29.470827  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:29.970750  525066 type.go:168] "Request Body" body=""
	I1212 00:31:29.970827  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:29.971160  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:29.971218  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:30.471043  525066 type.go:168] "Request Body" body=""
	I1212 00:31:30.471109  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:30.471376  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:30.971171  525066 type.go:168] "Request Body" body=""
	I1212 00:31:30.971241  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:30.971550  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:31.471358  525066 type.go:168] "Request Body" body=""
	I1212 00:31:31.471445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:31.471839  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:31.970419  525066 type.go:168] "Request Body" body=""
	I1212 00:31:31.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:31.970752  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:32.470476  525066 type.go:168] "Request Body" body=""
	I1212 00:31:32.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:32.470896  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:32.470960  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:32.970646  525066 type.go:168] "Request Body" body=""
	I1212 00:31:32.970737  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:32.971068  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:33.470394  525066 type.go:168] "Request Body" body=""
	I1212 00:31:33.470464  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:33.470730  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:33.970441  525066 type.go:168] "Request Body" body=""
	I1212 00:31:33.970512  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:33.970887  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:34.470452  525066 type.go:168] "Request Body" body=""
	I1212 00:31:34.470528  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:34.471050  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:34.471101  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:34.971076  525066 type.go:168] "Request Body" body=""
	I1212 00:31:34.971150  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:34.971412  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-035643 repeats every ~500ms from 00:31:35 through 00:32:37; every attempt returns an empty response (status="" headers="" milliseconds=0), and node_ready.go:55 logs the identical "connection refused" (will retry) warning every two to three seconds throughout ...]
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:37.470827  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:37.970393  525066 type.go:168] "Request Body" body=""
	I1212 00:32:37.970466  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:37.970813  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:38.470425  525066 type.go:168] "Request Body" body=""
	I1212 00:32:38.470496  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:38.470842  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:38.970540  525066 type.go:168] "Request Body" body=""
	I1212 00:32:38.970620  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:38.970960  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:38.971034  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:39.470693  525066 type.go:168] "Request Body" body=""
	I1212 00:32:39.470760  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:39.471016  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:39.970946  525066 type.go:168] "Request Body" body=""
	I1212 00:32:39.971029  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:39.971356  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:40.470492  525066 type.go:168] "Request Body" body=""
	I1212 00:32:40.470569  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:40.470930  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:40.970610  525066 type.go:168] "Request Body" body=""
	I1212 00:32:40.970675  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:40.970950  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:41.470442  525066 type.go:168] "Request Body" body=""
	I1212 00:32:41.470517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:41.470854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:41.470914  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:41.970457  525066 type.go:168] "Request Body" body=""
	I1212 00:32:41.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:41.970877  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:42.470405  525066 type.go:168] "Request Body" body=""
	I1212 00:32:42.470478  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:42.470764  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:42.970423  525066 type.go:168] "Request Body" body=""
	I1212 00:32:42.970491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:42.970824  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:43.470427  525066 type.go:168] "Request Body" body=""
	I1212 00:32:43.470499  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:43.470854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:43.970397  525066 type.go:168] "Request Body" body=""
	I1212 00:32:43.970461  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:43.970746  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:43.970790  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:44.470423  525066 type.go:168] "Request Body" body=""
	I1212 00:32:44.470501  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:44.470842  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:44.970773  525066 type.go:168] "Request Body" body=""
	I1212 00:32:44.970846  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:44.971174  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:45.470779  525066 type.go:168] "Request Body" body=""
	I1212 00:32:45.470852  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:45.471113  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:45.970429  525066 type.go:168] "Request Body" body=""
	I1212 00:32:45.970512  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:45.970857  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:45.970913  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:46.470599  525066 type.go:168] "Request Body" body=""
	I1212 00:32:46.470698  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:46.471036  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:46.970724  525066 type.go:168] "Request Body" body=""
	I1212 00:32:46.970811  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:46.971101  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:47.470415  525066 type.go:168] "Request Body" body=""
	I1212 00:32:47.470487  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:47.470829  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:47.970458  525066 type.go:168] "Request Body" body=""
	I1212 00:32:47.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:47.970877  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:47.970934  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:48.470399  525066 type.go:168] "Request Body" body=""
	I1212 00:32:48.470468  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:48.470756  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:48.970465  525066 type.go:168] "Request Body" body=""
	I1212 00:32:48.970543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:48.970922  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:49.470651  525066 type.go:168] "Request Body" body=""
	I1212 00:32:49.470742  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:49.471098  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:49.970876  525066 type.go:168] "Request Body" body=""
	I1212 00:32:49.970959  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:49.971229  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:49.971270  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:50.471244  525066 type.go:168] "Request Body" body=""
	I1212 00:32:50.471322  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:50.471670  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:50.970390  525066 type.go:168] "Request Body" body=""
	I1212 00:32:50.970471  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:50.970817  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:51.470505  525066 type.go:168] "Request Body" body=""
	I1212 00:32:51.470577  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:51.470954  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:51.970430  525066 type.go:168] "Request Body" body=""
	I1212 00:32:51.970500  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:51.970854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:52.470564  525066 type.go:168] "Request Body" body=""
	I1212 00:32:52.470637  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:52.471003  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:52.471056  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:52.970380  525066 type.go:168] "Request Body" body=""
	I1212 00:32:52.970448  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:52.970779  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:53.470481  525066 type.go:168] "Request Body" body=""
	I1212 00:32:53.470554  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:53.470926  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:53.970647  525066 type.go:168] "Request Body" body=""
	I1212 00:32:53.970737  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:53.971090  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:54.471319  525066 type.go:168] "Request Body" body=""
	I1212 00:32:54.471392  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:54.471642  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:54.471682  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:54.970626  525066 type.go:168] "Request Body" body=""
	I1212 00:32:54.970705  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:54.971020  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:55.470451  525066 type.go:168] "Request Body" body=""
	I1212 00:32:55.470522  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:55.470854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:55.970384  525066 type.go:168] "Request Body" body=""
	I1212 00:32:55.970461  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:55.970755  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:56.470437  525066 type.go:168] "Request Body" body=""
	I1212 00:32:56.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:56.470867  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:56.970577  525066 type.go:168] "Request Body" body=""
	I1212 00:32:56.970650  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:56.971023  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:56.971077  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:57.470742  525066 type.go:168] "Request Body" body=""
	I1212 00:32:57.470815  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:57.471167  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:57.970872  525066 type.go:168] "Request Body" body=""
	I1212 00:32:57.970953  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:57.971280  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:58.471062  525066 type.go:168] "Request Body" body=""
	I1212 00:32:58.471134  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:58.471462  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:58.971255  525066 type.go:168] "Request Body" body=""
	I1212 00:32:58.971322  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:58.971578  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:58.971620  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:59.471341  525066 type.go:168] "Request Body" body=""
	I1212 00:32:59.471410  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:59.471735  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:59.970614  525066 type.go:168] "Request Body" body=""
	I1212 00:32:59.970716  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:59.971048  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:00.470331  525066 type.go:168] "Request Body" body=""
	I1212 00:33:00.470413  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:00.470671  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:00.970395  525066 type.go:168] "Request Body" body=""
	I1212 00:33:00.970485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:00.970885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:01.470464  525066 type.go:168] "Request Body" body=""
	I1212 00:33:01.470537  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:01.470879  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:01.470943  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:01.970438  525066 type.go:168] "Request Body" body=""
	I1212 00:33:01.970506  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:01.970852  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:02.470619  525066 type.go:168] "Request Body" body=""
	I1212 00:33:02.470714  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:02.471075  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:02.970791  525066 type.go:168] "Request Body" body=""
	I1212 00:33:02.970863  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:02.971208  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:03.470951  525066 type.go:168] "Request Body" body=""
	I1212 00:33:03.471027  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:03.471358  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:03.471426  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:03.971137  525066 type.go:168] "Request Body" body=""
	I1212 00:33:03.971208  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:03.971542  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:04.471345  525066 type.go:168] "Request Body" body=""
	I1212 00:33:04.471415  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:04.471746  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:04.970402  525066 type.go:168] "Request Body" body=""
	I1212 00:33:04.970479  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:04.970766  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:05.470439  525066 type.go:168] "Request Body" body=""
	I1212 00:33:05.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:05.470849  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:05.970564  525066 type.go:168] "Request Body" body=""
	I1212 00:33:05.970637  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:05.970984  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:05.971040  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:06.470424  525066 type.go:168] "Request Body" body=""
	I1212 00:33:06.470502  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:06.470781  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:06.970465  525066 type.go:168] "Request Body" body=""
	I1212 00:33:06.970547  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:06.970898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:07.470461  525066 type.go:168] "Request Body" body=""
	I1212 00:33:07.470546  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:07.470897  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:07.970572  525066 type.go:168] "Request Body" body=""
	I1212 00:33:07.970648  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:07.970982  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:08.470661  525066 type.go:168] "Request Body" body=""
	I1212 00:33:08.470757  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:08.471101  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:08.471155  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:08.970834  525066 type.go:168] "Request Body" body=""
	I1212 00:33:08.970915  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:08.971261  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:09.471007  525066 type.go:168] "Request Body" body=""
	I1212 00:33:09.471080  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:09.471383  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:09.971222  525066 type.go:168] "Request Body" body=""
	I1212 00:33:09.971292  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:09.971613  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:10.470464  525066 type.go:168] "Request Body" body=""
	I1212 00:33:10.470536  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:10.470871  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:10.970418  525066 type.go:168] "Request Body" body=""
	I1212 00:33:10.970483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:10.970802  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:10.970858  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:11.470473  525066 type.go:168] "Request Body" body=""
	I1212 00:33:11.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:11.470900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:11.970438  525066 type.go:168] "Request Body" body=""
	I1212 00:33:11.970517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:11.970868  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:12.470560  525066 type.go:168] "Request Body" body=""
	I1212 00:33:12.470632  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:12.470928  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:12.970459  525066 type.go:168] "Request Body" body=""
	I1212 00:33:12.970538  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:12.970885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:12.970943  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:13.470422  525066 type.go:168] "Request Body" body=""
	I1212 00:33:13.470501  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:13.470840  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:13.970429  525066 type.go:168] "Request Body" body=""
	I1212 00:33:13.970498  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:13.970796  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:14.470442  525066 type.go:168] "Request Body" body=""
	I1212 00:33:14.470515  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:14.470869  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:14.970766  525066 type.go:168] "Request Body" body=""
	I1212 00:33:14.970842  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:14.971184  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:14.971238  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:15.470940  525066 type.go:168] "Request Body" body=""
	I1212 00:33:15.471011  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:15.471271  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:15.971043  525066 type.go:168] "Request Body" body=""
	I1212 00:33:15.971115  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:15.971480  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:16.471281  525066 type.go:168] "Request Body" body=""
	I1212 00:33:16.471357  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:16.471735  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:16.970410  525066 type.go:168] "Request Body" body=""
	I1212 00:33:16.970477  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:16.970775  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:17.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:33:17.470524  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:17.470842  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:17.470887  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:17.970551  525066 type.go:168] "Request Body" body=""
	I1212 00:33:17.970623  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:17.970977  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:18.470481  525066 type.go:168] "Request Body" body=""
	I1212 00:33:18.470557  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:18.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:18.970446  525066 type.go:168] "Request Body" body=""
	I1212 00:33:18.970516  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:18.970881  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:19.470588  525066 type.go:168] "Request Body" body=""
	I1212 00:33:19.470670  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:19.471039  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:19.471096  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:19.970916  525066 type.go:168] "Request Body" body=""
	I1212 00:33:19.971010  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:19.971330  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:20.471261  525066 type.go:168] "Request Body" body=""
	I1212 00:33:20.471340  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:20.471686  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:20.970404  525066 type.go:168] "Request Body" body=""
	I1212 00:33:20.970481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:20.970812  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:21.470406  525066 type.go:168] "Request Body" body=""
	I1212 00:33:21.470472  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:21.470744  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:21.970441  525066 type.go:168] "Request Body" body=""
	I1212 00:33:21.970514  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:21.970842  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:21.970902  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:22.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:33:22.470541  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:22.470874  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:22.970430  525066 type.go:168] "Request Body" body=""
	I1212 00:33:22.970499  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:22.970829  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:23.470435  525066 type.go:168] "Request Body" body=""
	I1212 00:33:23.470504  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:23.470830  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:23.970480  525066 type.go:168] "Request Body" body=""
	I1212 00:33:23.970555  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:23.970905  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:23.970965  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:24.470433  525066 type.go:168] "Request Body" body=""
	I1212 00:33:24.470506  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:24.470783  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:24.970803  525066 type.go:168] "Request Body" body=""
	I1212 00:33:24.970886  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:24.971251  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:25.471119  525066 type.go:168] "Request Body" body=""
	I1212 00:33:25.471192  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:25.471497  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:25.971209  525066 type.go:168] "Request Body" body=""
	I1212 00:33:25.971285  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:25.971580  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:25.971621  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:26.470369  525066 type.go:168] "Request Body" body=""
	I1212 00:33:26.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:26.470776  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:26.970409  525066 type.go:168] "Request Body" body=""
	I1212 00:33:26.970481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:26.970791  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... ~120 additional identical polls elided: the same "GET https://192.168.49.2:8441/api/v1/nodes/functional-035643" request repeats every ~500ms from 00:33:27.470 through 00:34:25.971, each response comes back empty (status="" headers="" milliseconds=0), and node_ready.go:55 re-logs the "will retry" warning for "dial tcp 192.168.49.2:8441: connect: connection refused" roughly every 2.5 seconds ...]
	I1212 00:34:26.470429  525066 type.go:168] "Request Body" body=""
	I1212 00:34:26.470509  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:26.470892  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:26.970602  525066 type.go:168] "Request Body" body=""
	I1212 00:34:26.970714  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:26.971045  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:26.971097  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:27.470596  525066 type.go:168] "Request Body" body=""
	I1212 00:34:27.470664  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:27.470983  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:27.970426  525066 type.go:168] "Request Body" body=""
	I1212 00:34:27.970519  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:27.970812  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:28.470515  525066 type.go:168] "Request Body" body=""
	I1212 00:34:28.470605  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:28.470981  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:28.970522  525066 type.go:168] "Request Body" body=""
	I1212 00:34:28.970588  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:28.970857  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:29.470468  525066 type.go:168] "Request Body" body=""
	I1212 00:34:29.470544  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:29.470864  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:29.470919  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:29.970798  525066 type.go:168] "Request Body" body=""
	I1212 00:34:29.970869  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:29.971213  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:30.471137  525066 type.go:168] "Request Body" body=""
	I1212 00:34:30.471225  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:30.471550  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:30.971357  525066 type.go:168] "Request Body" body=""
	I1212 00:34:30.971431  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:30.971742  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:31.470472  525066 type.go:168] "Request Body" body=""
	I1212 00:34:31.470560  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:31.470967  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:31.471019  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:31.970675  525066 type.go:168] "Request Body" body=""
	I1212 00:34:31.970764  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:31.971052  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:32.470453  525066 type.go:168] "Request Body" body=""
	I1212 00:34:32.470527  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:32.470874  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:32.970593  525066 type.go:168] "Request Body" body=""
	I1212 00:34:32.970672  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:32.971032  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:33.470510  525066 type.go:168] "Request Body" body=""
	I1212 00:34:33.470583  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:33.470858  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:33.970552  525066 type.go:168] "Request Body" body=""
	I1212 00:34:33.970631  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:33.970999  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:33.971054  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:34.470581  525066 type.go:168] "Request Body" body=""
	I1212 00:34:34.470663  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:34.471029  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:34.970856  525066 type.go:168] "Request Body" body=""
	I1212 00:34:34.970934  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:34.971203  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:35.470956  525066 type.go:168] "Request Body" body=""
	I1212 00:34:35.471029  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:35.471364  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:35.971153  525066 type.go:168] "Request Body" body=""
	I1212 00:34:35.971231  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:35.971540  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:35.971597  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:36.471328  525066 type.go:168] "Request Body" body=""
	I1212 00:34:36.471400  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:36.471724  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:36.970408  525066 type.go:168] "Request Body" body=""
	I1212 00:34:36.970487  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:36.970849  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:37.470460  525066 type.go:168] "Request Body" body=""
	I1212 00:34:37.470535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:37.470898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:37.970425  525066 type.go:168] "Request Body" body=""
	I1212 00:34:37.970536  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:37.970843  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:38.470574  525066 type.go:168] "Request Body" body=""
	I1212 00:34:38.470652  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:38.470997  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:38.471051  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:38.970762  525066 type.go:168] "Request Body" body=""
	I1212 00:34:38.970837  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:38.971165  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:39.470821  525066 type.go:168] "Request Body" body=""
	I1212 00:34:39.470925  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:39.471276  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:39.971132  525066 type.go:168] "Request Body" body=""
	I1212 00:34:39.971208  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:39.971504  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:40.470403  525066 type.go:168] "Request Body" body=""
	I1212 00:34:40.470483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:40.470859  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:40.970437  525066 type.go:168] "Request Body" body=""
	I1212 00:34:40.970509  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:40.970780  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:40.970828  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:41.470472  525066 type.go:168] "Request Body" body=""
	I1212 00:34:41.470541  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:41.471156  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:41.970992  525066 type.go:168] "Request Body" body=""
	I1212 00:34:41.971063  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:41.971400  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:42.471111  525066 type.go:168] "Request Body" body=""
	I1212 00:34:42.471182  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:42.471437  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:42.971249  525066 type.go:168] "Request Body" body=""
	I1212 00:34:42.971318  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:42.971637  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:42.971693  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:43.470373  525066 type.go:168] "Request Body" body=""
	I1212 00:34:43.470452  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:43.470770  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:43.970406  525066 type.go:168] "Request Body" body=""
	I1212 00:34:43.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:43.970813  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:44.470444  525066 type.go:168] "Request Body" body=""
	I1212 00:34:44.470522  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:44.470871  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:44.970718  525066 type.go:168] "Request Body" body=""
	I1212 00:34:44.970796  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:44.971129  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:45.470791  525066 type.go:168] "Request Body" body=""
	I1212 00:34:45.470857  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:45.471157  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:45.471208  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:45.971058  525066 type.go:168] "Request Body" body=""
	I1212 00:34:45.971144  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:45.971575  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:46.471360  525066 type.go:168] "Request Body" body=""
	I1212 00:34:46.471430  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:46.471804  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:46.970483  525066 type.go:168] "Request Body" body=""
	I1212 00:34:46.970548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:46.970854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:47.470504  525066 type.go:168] "Request Body" body=""
	I1212 00:34:47.470579  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:47.470919  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:47.970616  525066 type.go:168] "Request Body" body=""
	I1212 00:34:47.970715  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:47.971061  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:47.971117  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:48.470377  525066 type.go:168] "Request Body" body=""
	I1212 00:34:48.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:48.470744  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:48.970454  525066 type.go:168] "Request Body" body=""
	I1212 00:34:48.970525  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:48.970871  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:49.470443  525066 type.go:168] "Request Body" body=""
	I1212 00:34:49.470517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:49.470842  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:49.970842  525066 type.go:168] "Request Body" body=""
	I1212 00:34:49.970921  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:49.971185  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:49.971235  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:50.471289  525066 type.go:168] "Request Body" body=""
	I1212 00:34:50.471363  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:50.471683  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:50.970385  525066 type.go:168] "Request Body" body=""
	I1212 00:34:50.970470  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:50.970820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:51.470498  525066 type.go:168] "Request Body" body=""
	I1212 00:34:51.470577  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:51.470901  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:51.970643  525066 type.go:168] "Request Body" body=""
	I1212 00:34:51.970737  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:51.971081  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:52.470467  525066 type.go:168] "Request Body" body=""
	I1212 00:34:52.470543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:52.470852  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:52.470906  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:52.970347  525066 type.go:168] "Request Body" body=""
	I1212 00:34:52.970413  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:52.970656  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:53.470361  525066 type.go:168] "Request Body" body=""
	I1212 00:34:53.470433  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:53.470758  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:53.970445  525066 type.go:168] "Request Body" body=""
	I1212 00:34:53.970521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:53.970846  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:54.470385  525066 type.go:168] "Request Body" body=""
	I1212 00:34:54.470456  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:54.470728  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:54.970731  525066 type.go:168] "Request Body" body=""
	I1212 00:34:54.970808  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:54.971141  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:54.971194  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:55.470962  525066 type.go:168] "Request Body" body=""
	I1212 00:34:55.471032  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:55.471384  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:55.971164  525066 type.go:168] "Request Body" body=""
	I1212 00:34:55.971235  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:55.971578  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:56.471360  525066 type.go:168] "Request Body" body=""
	I1212 00:34:56.471429  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:56.471744  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:56.970433  525066 type.go:168] "Request Body" body=""
	I1212 00:34:56.970514  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:56.970841  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:57.470322  525066 type.go:168] "Request Body" body=""
	I1212 00:34:57.470393  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:57.470705  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:57.470754  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:57.970446  525066 type.go:168] "Request Body" body=""
	I1212 00:34:57.970517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:57.970859  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:58.470584  525066 type.go:168] "Request Body" body=""
	I1212 00:34:58.470658  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:58.470977  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:58.970390  525066 type.go:168] "Request Body" body=""
	I1212 00:34:58.970459  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:58.970753  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:59.470435  525066 type.go:168] "Request Body" body=""
	I1212 00:34:59.470506  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:59.470847  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:59.470910  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:59.970877  525066 type.go:168] "Request Body" body=""
	I1212 00:34:59.970974  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:59.971302  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:00.471231  525066 type.go:168] "Request Body" body=""
	I1212 00:35:00.471314  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:00.471616  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:00.970336  525066 type.go:168] "Request Body" body=""
	I1212 00:35:00.970416  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:00.970781  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:01.470511  525066 type.go:168] "Request Body" body=""
	I1212 00:35:01.470600  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:01.470948  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:01.471002  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:01.970437  525066 type.go:168] "Request Body" body=""
	I1212 00:35:01.970523  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:01.970847  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:02.470535  525066 type.go:168] "Request Body" body=""
	I1212 00:35:02.470612  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:02.470975  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:02.970668  525066 type.go:168] "Request Body" body=""
	I1212 00:35:02.970763  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:02.971094  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:03.470416  525066 type.go:168] "Request Body" body=""
	I1212 00:35:03.470482  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:03.470744  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:03.970472  525066 type.go:168] "Request Body" body=""
	I1212 00:35:03.970553  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:03.970942  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:03.971000  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:04.470451  525066 type.go:168] "Request Body" body=""
	I1212 00:35:04.470558  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:04.470919  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:04.970804  525066 type.go:168] "Request Body" body=""
	I1212 00:35:04.970871  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:04.971144  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:05.470467  525066 type.go:168] "Request Body" body=""
	I1212 00:35:05.470537  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:05.470942  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:05.970499  525066 type.go:168] "Request Body" body=""
	I1212 00:35:05.970585  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:05.970938  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:06.470332  525066 type.go:168] "Request Body" body=""
	I1212 00:35:06.470408  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:06.470730  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:06.470781  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:06.970454  525066 type.go:168] "Request Body" body=""
	I1212 00:35:06.970525  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:06.970921  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:07.470487  525066 type.go:168] "Request Body" body=""
	I1212 00:35:07.470570  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:07.470917  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:07.970583  525066 type.go:168] "Request Body" body=""
	I1212 00:35:07.970653  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:07.970985  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:08.470669  525066 type.go:168] "Request Body" body=""
	I1212 00:35:08.470768  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:08.471111  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:08.471164  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:08.970674  525066 type.go:168] "Request Body" body=""
	I1212 00:35:08.970770  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:08.971117  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:09.470791  525066 type.go:168] "Request Body" body=""
	I1212 00:35:09.470928  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:09.471187  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:09.971168  525066 type.go:168] "Request Body" body=""
	I1212 00:35:09.971242  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:09.971558  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:10.470457  525066 type.go:168] "Request Body" body=""
	I1212 00:35:10.470566  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:10.470928  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:10.970414  525066 type.go:168] "Request Body" body=""
	I1212 00:35:10.970483  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:10.970810  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:10.970861  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:11.470561  525066 type.go:168] "Request Body" body=""
	I1212 00:35:11.470643  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:11.471038  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:11.970830  525066 type.go:168] "Request Body" body=""
	I1212 00:35:11.970907  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:11.971183  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:12.470991  525066 type.go:168] "Request Body" body=""
	I1212 00:35:12.471059  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:12.471390  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:12.971182  525066 type.go:168] "Request Body" body=""
	I1212 00:35:12.971260  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:12.971601  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:12.971680  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:13.471284  525066 type.go:168] "Request Body" body=""
	I1212 00:35:13.471356  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:13.471730  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:13.970398  525066 type.go:168] "Request Body" body=""
	I1212 00:35:13.970477  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:13.970795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:14.470449  525066 type.go:168] "Request Body" body=""
	I1212 00:35:14.470529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:14.470838  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:14.970781  525066 type.go:168] "Request Body" body=""
	I1212 00:35:14.970875  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:14.971268  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:15.471033  525066 type.go:168] "Request Body" body=""
	I1212 00:35:15.471104  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:15.471367  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:15.471407  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:15.971146  525066 type.go:168] "Request Body" body=""
	I1212 00:35:15.971216  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:15.971526  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:16.471298  525066 type.go:168] "Request Body" body=""
	I1212 00:35:16.471376  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:16.471748  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:16.970401  525066 type.go:168] "Request Body" body=""
	I1212 00:35:16.970468  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:16.970761  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:17.470431  525066 type.go:168] "Request Body" body=""
	I1212 00:35:17.470501  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:17.470854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:17.970433  525066 type.go:168] "Request Body" body=""
	I1212 00:35:17.970504  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:17.970880  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:17.970936  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:18.470421  525066 type.go:168] "Request Body" body=""
	I1212 00:35:18.470491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:18.470768  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:18.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:35:18.970526  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:18.970872  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:19.470463  525066 type.go:168] "Request Body" body=""
	I1212 00:35:19.470546  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:19.470905  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:19.970857  525066 type.go:168] "Request Body" body=""
	I1212 00:35:19.970930  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:19.971189  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:19.971235  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:20.471222  525066 type.go:168] "Request Body" body=""
	I1212 00:35:20.471296  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:20.471592  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:20.971375  525066 type.go:168] "Request Body" body=""
	I1212 00:35:20.971447  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:20.971753  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:21.470418  525066 type.go:168] "Request Body" body=""
	I1212 00:35:21.470490  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:21.470805  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:21.970405  525066 type.go:168] "Request Body" body=""
	I1212 00:35:21.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:21.970793  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:22.470400  525066 type.go:168] "Request Body" body=""
	I1212 00:35:22.470486  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:22.470834  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:22.470893  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:22.970432  525066 type.go:168] "Request Body" body=""
	I1212 00:35:22.970507  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:22.970864  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:23.470610  525066 type.go:168] "Request Body" body=""
	I1212 00:35:23.470694  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:23.471022  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:23.970472  525066 type.go:168] "Request Body" body=""
	I1212 00:35:23.970544  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:23.970934  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:24.470526  525066 type.go:168] "Request Body" body=""
	I1212 00:35:24.470602  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:24.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:24.470937  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:24.970814  525066 type.go:168] "Request Body" body=""
	I1212 00:35:24.970900  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:24.971212  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:25.470981  525066 type.go:168] "Request Body" body=""
	I1212 00:35:25.471083  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:25.471412  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:25.971186  525066 type.go:168] "Request Body" body=""
	I1212 00:35:25.971270  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:25.971542  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:26.471296  525066 type.go:168] "Request Body" body=""
	I1212 00:35:26.471372  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:26.471691  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:26.471748  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:26.970425  525066 type.go:168] "Request Body" body=""
	I1212 00:35:26.970494  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:26.970788  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:27.470473  525066 type.go:168] "Request Body" body=""
	I1212 00:35:27.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:27.470900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:27.970608  525066 type.go:168] "Request Body" body=""
	I1212 00:35:27.970694  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:27.971049  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:28.470775  525066 type.go:168] "Request Body" body=""
	I1212 00:35:28.470856  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:28.471187  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:28.970958  525066 type.go:168] "Request Body" body=""
	I1212 00:35:28.971022  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:28.971277  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:28.971316  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:29.471162  525066 type.go:168] "Request Body" body=""
	I1212 00:35:29.471240  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:29.471593  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:29.970376  525066 type.go:168] "Request Body" body=""
	I1212 00:35:29.970454  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:29.970816  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:30.471109  525066 type.go:168] "Request Body" body=""
	I1212 00:35:30.471183  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:30.471480  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:30.971287  525066 type.go:168] "Request Body" body=""
	I1212 00:35:30.971360  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:30.971672  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:30.971729  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:31.470405  525066 type.go:168] "Request Body" body=""
	I1212 00:35:31.470485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:31.470830  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:31.970549  525066 type.go:168] "Request Body" body=""
	I1212 00:35:31.970619  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:31.970957  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:32.470649  525066 type.go:168] "Request Body" body=""
	I1212 00:35:32.470745  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:32.471093  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:32.970460  525066 type.go:168] "Request Body" body=""
	I1212 00:35:32.970533  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:32.970861  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:33.470443  525066 type.go:168] "Request Body" body=""
	I1212 00:35:33.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:33.470783  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:33.470825  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:33.970454  525066 type.go:168] "Request Body" body=""
	I1212 00:35:33.970535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:33.970883  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:34.470595  525066 type.go:168] "Request Body" body=""
	I1212 00:35:34.470673  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:34.471021  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:34.970778  525066 type.go:168] "Request Body" body=""
	I1212 00:35:34.970845  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:34.971108  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:35.470789  525066 type.go:168] "Request Body" body=""
	I1212 00:35:35.470893  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:35.471408  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:35.471455  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:35.971178  525066 type.go:168] "Request Body" body=""
	I1212 00:35:35.971249  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:35.971545  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:36.471287  525066 type.go:168] "Request Body" body=""
	I1212 00:35:36.471358  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:36.471623  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:36.970386  525066 type.go:168] "Request Body" body=""
	I1212 00:35:36.970468  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:36.970815  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:37.470527  525066 type.go:168] "Request Body" body=""
	I1212 00:35:37.470612  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:37.470950  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:37.970440  525066 type.go:168] "Request Body" body=""
	I1212 00:35:37.970510  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:37.970824  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:37.970880  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:38.470422  525066 type.go:168] "Request Body" body=""
	I1212 00:35:38.470503  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:38.470828  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:38.970459  525066 type.go:168] "Request Body" body=""
	I1212 00:35:38.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:38.970885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:39.470567  525066 type.go:168] "Request Body" body=""
	I1212 00:35:39.470634  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:39.470915  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:39.971016  525066 type.go:168] "Request Body" body=""
	I1212 00:35:39.971090  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:39.971449  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:39.971507  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:40.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:35:40.470535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:40.470907  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:40.970388  525066 type.go:168] "Request Body" body=""
	I1212 00:35:40.970449  525066 node_ready.go:38] duration metric: took 6m0.000230679s for node "functional-035643" to be "Ready" ...
	I1212 00:35:40.973928  525066 out.go:203] 
	W1212 00:35:40.976747  525066 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 00:35:40.976773  525066 out.go:285] * 
	W1212 00:35:40.981440  525066 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:35:40.984739  525066 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 00:35:50 functional-035643 crio[5335]: time="2025-12-12T00:35:50.093738425Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=306252a9-50d1-4cf7-879f-649314fb6779 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.166350422Z" level=info msg="Checking image status: minikube-local-cache-test:functional-035643" id=6ec98bc0-0fe5-4d00-8dbc-0c76e49800c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.166547981Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.166597826Z" level=info msg="Image minikube-local-cache-test:functional-035643 not found" id=6ec98bc0-0fe5-4d00-8dbc-0c76e49800c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.166674804Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-035643 found" id=6ec98bc0-0fe5-4d00-8dbc-0c76e49800c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.191658974Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-035643" id=2cec4339-ccdf-4bb9-bbfe-6be8600e4cfb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.19180233Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-035643 not found" id=2cec4339-ccdf-4bb9-bbfe-6be8600e4cfb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.191844077Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-035643 found" id=2cec4339-ccdf-4bb9-bbfe-6be8600e4cfb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.217264929Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-035643" id=37843a35-a46e-4c4e-8f5c-a022b5d36fb4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.217422563Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-035643 not found" id=37843a35-a46e-4c4e-8f5c-a022b5d36fb4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.217481129Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-035643 found" id=37843a35-a46e-4c4e-8f5c-a022b5d36fb4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:52 functional-035643 crio[5335]: time="2025-12-12T00:35:52.176503978Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=0e38953a-955f-4bff-9e4f-43d0bbe4bbce name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:52 functional-035643 crio[5335]: time="2025-12-12T00:35:52.504101247Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=ea64e7ef-dc23-4ef6-bc3a-61a9d8f40935 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:52 functional-035643 crio[5335]: time="2025-12-12T00:35:52.50424252Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=ea64e7ef-dc23-4ef6-bc3a-61a9d8f40935 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:52 functional-035643 crio[5335]: time="2025-12-12T00:35:52.504277284Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=ea64e7ef-dc23-4ef6-bc3a-61a9d8f40935 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.112644556Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=4dea7078-8f6a-4566-b66f-f279f8eb3817 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.112790915Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=4dea7078-8f6a-4566-b66f-f279f8eb3817 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.112826672Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=4dea7078-8f6a-4566-b66f-f279f8eb3817 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.135996574Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=fbf6d603-4b1c-4b27-a3bc-c9cd9d3d9669 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.136166318Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=fbf6d603-4b1c-4b27-a3bc-c9cd9d3d9669 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.136218017Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=fbf6d603-4b1c-4b27-a3bc-c9cd9d3d9669 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.161115157Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=9c700902-8781-4f47-a05e-d73425f0903f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.161266193Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=9c700902-8781-4f47-a05e-d73425f0903f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.161316358Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=9c700902-8781-4f47-a05e-d73425f0903f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.688624008Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=95959e99-0c5e-4a6b-b332-e8c760445a5d name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:35:55.227132    9356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:55.228079    9356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:55.229865    9356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:55.230174    9356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:55.232507    9356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 00:17] overlayfs: idmapped layers are currently not supported
	[Dec12 00:18] overlayfs: idmapped layers are currently not supported
	[Dec12 00:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:35:55 up  3:18,  0 user,  load average: 0.31, 0.32, 0.79
	Linux functional-035643 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:35:53 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:53 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 12 00:35:53 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:53 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:53 functional-035643 kubelet[9252]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:53 functional-035643 kubelet[9252]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:53 functional-035643 kubelet[9252]: E1212 00:35:53.755962    9252 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:53 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:53 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:54 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 12 00:35:54 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:54 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:54 functional-035643 kubelet[9272]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:54 functional-035643 kubelet[9272]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:54 functional-035643 kubelet[9272]: E1212 00:35:54.531843    9272 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:54 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:54 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:55 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Dec 12 00:35:55 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:55 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:55 functional-035643 kubelet[9361]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:55 functional-035643 kubelet[9361]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:55 functional-035643 kubelet[9361]: E1212 00:35:55.277453    9361 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:55 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:55 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
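
The kubelet section above is the root cause behind every refused connection in this log: the node agent exits on startup with "kubelet is configured to not run on a host using cgroup v1", and systemd has restarted it more than 1,150 times. A host's cgroup version can be checked by looking for the unified-hierarchy marker file; the Go sketch below is a minimal illustration of that standard check, not minikube code:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// /sys/fs/cgroup/cgroup.controllers exists only on a cgroup v2
		// (unified hierarchy) mount; on a cgroup v1 host the Stat fails.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 - recent kubelets refuse to start here")
		}
	}

On this host the sketch would take the cgroup v1 branch, matching the validation error in the kubelet restart loop.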
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 2 (356.33372ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.35s)
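
For context on the six-minute wall in the replayed start log (node_ready.go:38 reports the wait for node "functional-035643" to be "Ready" ending at exactly 6m0s, after one GET every 500ms), the pattern is a deadline-bounded poll loop. The stdlib-only sketch below reproduces the shape of that loop; it is an illustration, not minikube's actual implementation, and the always-failing check is a stand-in for the node GET:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// pollUntil runs check every interval until it succeeds or the context
	// expires; expiry surfaces as "context deadline exceeded", as in the log.
	func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
		t := time.NewTicker(interval)
		defer t.Stop()
		for {
			if err := check(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-t.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		err := pollUntil(ctx, 500*time.Millisecond, func() error {
			return errors.New("connect: connection refused") // stand-in for GET /api/v1/nodes/...
		})
		fmt.Println(err) // context deadline exceeded
	}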

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-035643 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-035643 get pods: exit status 1 (107.620435ms)

                                                
                                                
** stderr ** 
	E1212 00:35:56.284930  530649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:35:56.285343  530649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:35:56.286843  530649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:35:56.287147  530649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:35:56.288591  530649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-035643 get pods": exit status 1
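
The stderr above shows kubectl itself working; it simply cannot reach 192.168.49.2:8441. When triaging this kind of failure it helps to separate "connection refused" (host reachable, nothing listening) from a hang or timeout (traffic filtered). A minimal stdlib probe, with the address copied from the errors above, is enough for that distinction; this is a diagnostic sketch, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the apiserver endpoint every kubectl call above fails on.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // here: connection refused
			return
		}
		conn.Close()
		fmt.Println("TCP port open - something is listening on 8441")
	}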
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-035643
helpers_test.go:244: (dbg) docker inspect functional-035643:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	        "Created": "2025-12-12T00:21:16.539894649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:21:16.600605162Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hostname",
	        "HostsPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hosts",
	        "LogPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a-json.log",
	        "Name": "/functional-035643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-035643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	                "LowerDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-035643",
	                "Source": "/var/lib/docker/volumes/functional-035643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035643",
	                "name.minikube.sigs.k8s.io": "functional-035643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ede6a17442d6bf83b8f4c9f93f252345cec3d0406f82de2d6bd2cfd4713e2163",
	            "SandboxKey": "/var/run/docker/netns/ede6a17442d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d5:12:89:ea:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad01995b183fdebead6c725e2b942ae8ce2d3964b3552789fe5b50ee7e7239a3",
	                    "EndpointID": "d429a1cd0f840d042af4ad7ea0bda6067a342be7fb552083411004a3604b0124",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035643",
	                        "02b8c8e636a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
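
One detail worth noting in the inspect output: HostConfig.PortBindings requested 127.0.0.1 with an empty HostPort for every port, so Docker assigned ephemeral host ports, and NetworkSettings.Ports shows 8441/tcp landing on 127.0.0.1:33186. That mapping can be read back with docker inspect's Go-template format string; the sketch below shells out to the same CLI (standard `docker inspect -f` template syntax, container name taken from this report):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Extract the host port Docker assigned to container port 8441/tcp.
		format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format, "functional-035643").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}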
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643: exit status 2 (309.779011ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-035643 logs -n 25: (1.060215199s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-921447 image ls --format yaml --alsologtostderr                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image ls --format short --alsologtostderr                                                                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image ls --format json --alsologtostderr                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-921447 ssh pgrep buildkitd                                                                                                             │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ image   │ functional-921447 image ls --format table --alsologtostderr                                                                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image build -t localhost/my-image:functional-921447 testdata/build --alsologtostderr                                            │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image ls                                                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ delete  │ -p functional-921447                                                                                                                              │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ start   │ -p functional-035643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ start   │ -p functional-035643 --alsologtostderr -v=8                                                                                                       │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:29 UTC │                     │
	│ cache   │ functional-035643 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache add registry.k8s.io/pause:latest                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache add minikube-local-cache-test:functional-035643                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache delete minikube-local-cache-test:functional-035643                                                                        │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl images                                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ cache   │ functional-035643 cache reload                                                                                                                    │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ kubectl │ functional-035643 kubectl -- --context functional-035643 get pods                                                                                 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:29:34
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
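
Every entry below follows that klog-style header. For reference, a minimal Go sketch of splitting such a line into its fields (the regexp and field names are illustrative, not the parser minikube itself uses):

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches the documented format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		// Sample taken verbatim from the first line of this log.
		line := "I1212 00:29:34.833608  525066 out.go:360] Setting OutFile to fd 1 ..."
		m := logLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
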
	I1212 00:29:34.833608  525066 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:29:34.833799  525066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:34.833830  525066 out.go:374] Setting ErrFile to fd 2...
	I1212 00:29:34.833859  525066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:34.834244  525066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:29:34.834787  525066 out.go:368] Setting JSON to false
	I1212 00:29:34.835727  525066 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11520,"bootTime":1765487855,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:29:34.836335  525066 start.go:143] virtualization:  
	I1212 00:29:34.841302  525066 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:29:34.846669  525066 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:29:34.846785  525066 notify.go:221] Checking for updates...
	I1212 00:29:34.852399  525066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:29:34.855222  525066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:34.857924  525066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:29:34.860585  525066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:29:34.863145  525066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:29:34.866639  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:34.866818  525066 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:29:34.892569  525066 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:29:34.892680  525066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:34.954074  525066 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:29:34.944774098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:29:34.954186  525066 docker.go:319] overlay module found
	I1212 00:29:34.958427  525066 out.go:179] * Using the docker driver based on existing profile
	I1212 00:29:34.960983  525066 start.go:309] selected driver: docker
	I1212 00:29:34.961005  525066 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:34.961104  525066 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:29:34.961212  525066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:29:35.019269  525066 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:29:35.008770771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:29:35.019716  525066 cni.go:84] Creating CNI manager for ""
	I1212 00:29:35.019778  525066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:35.019842  525066 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:35.022879  525066 out.go:179] * Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	I1212 00:29:35.025659  525066 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:29:35.028463  525066 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:29:35.031434  525066 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:29:35.031495  525066 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:29:35.031510  525066 cache.go:65] Caching tarball of preloaded images
	I1212 00:29:35.031544  525066 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:29:35.031603  525066 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:29:35.031614  525066 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:29:35.031729  525066 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/config.json ...
	I1212 00:29:35.051219  525066 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:29:35.051245  525066 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:29:35.051267  525066 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:29:35.051303  525066 start.go:360] acquireMachinesLock for functional-035643: {Name:mkb0cdc7d354412594dc63c0234fde00134e758d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:29:35.051387  525066 start.go:364] duration metric: took 54.908µs to acquireMachinesLock for "functional-035643"
	I1212 00:29:35.051416  525066 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:29:35.051428  525066 fix.go:54] fixHost starting: 
	I1212 00:29:35.051696  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:35.069320  525066 fix.go:112] recreateIfNeeded on functional-035643: state=Running err=<nil>
	W1212 00:29:35.069352  525066 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:29:35.072554  525066 out.go:252] * Updating the running docker "functional-035643" container ...
	I1212 00:29:35.072600  525066 machine.go:94] provisionDockerMachine start ...
	I1212 00:29:35.072693  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.090330  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.090669  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.090706  525066 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:29:35.238363  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:29:35.238387  525066 ubuntu.go:182] provisioning hostname "functional-035643"
	I1212 00:29:35.238453  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.256201  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.256511  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.256528  525066 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-035643 && echo "functional-035643" | sudo tee /etc/hostname
	I1212 00:29:35.418094  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:29:35.418176  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.436164  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:35.436475  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:35.436494  525066 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-035643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-035643/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-035643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:29:35.594938  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:29:35.594969  525066 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 00:29:35.595009  525066 ubuntu.go:190] setting up certificates
	I1212 00:29:35.595026  525066 provision.go:84] configureAuth start
	I1212 00:29:35.595111  525066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:29:35.612398  525066 provision.go:143] copyHostCerts
	I1212 00:29:35.612439  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:29:35.612482  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 00:29:35.612494  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:29:35.612571  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 00:29:35.612671  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:29:35.612699  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 00:29:35.612707  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:29:35.612734  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 00:29:35.612781  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:29:35.612802  525066 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 00:29:35.612813  525066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:29:35.612837  525066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 00:29:35.612889  525066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.functional-035643 san=[127.0.0.1 192.168.49.2 functional-035643 localhost minikube]
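
The server certificate above is issued with the SANs listed in that log line and signed by the minikube CA. A minimal, self-contained Go sketch of producing a certificate with the same SANs (self-signed for brevity; minikube signs with ca.pem/ca-key.pem, so this is illustrative, not its implementation):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-035643"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log line: san=[127.0.0.1 192.168.49.2 functional-035643 localhost minikube]
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:    []string{"functional-035643", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
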
	I1212 00:29:35.977748  525066 provision.go:177] copyRemoteCerts
	I1212 00:29:35.977818  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:29:35.977857  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:35.995348  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.106772  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:29:36.106859  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:29:36.126035  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:29:36.126112  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:29:36.143996  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:29:36.144114  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:29:36.161387  525066 provision.go:87] duration metric: took 566.343959ms to configureAuth
	I1212 00:29:36.161415  525066 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:29:36.161612  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:36.161722  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.179565  525066 main.go:143] libmachine: Using SSH client type: native
	I1212 00:29:36.179872  525066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:29:36.179896  525066 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:29:36.525259  525066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:29:36.525285  525066 machine.go:97] duration metric: took 1.45267532s to provisionDockerMachine
	I1212 00:29:36.525297  525066 start.go:293] postStartSetup for "functional-035643" (driver="docker")
	I1212 00:29:36.525310  525066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:29:36.525385  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:29:36.525432  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.544323  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.650745  525066 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:29:36.654027  525066 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 00:29:36.654058  525066 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 00:29:36.654063  525066 command_runner.go:130] > VERSION_ID="12"
	I1212 00:29:36.654067  525066 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 00:29:36.654072  525066 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 00:29:36.654076  525066 command_runner.go:130] > ID=debian
	I1212 00:29:36.654081  525066 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 00:29:36.654086  525066 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 00:29:36.654098  525066 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 00:29:36.654164  525066 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:29:36.654184  525066 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:29:36.654203  525066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 00:29:36.654261  525066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 00:29:36.654368  525066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 00:29:36.654379  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /etc/ssl/certs/4909542.pem
	I1212 00:29:36.654462  525066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> hosts in /etc/test/nested/copy/490954
	I1212 00:29:36.654470  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> /etc/test/nested/copy/490954/hosts
	I1212 00:29:36.654523  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/490954
	I1212 00:29:36.661942  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:29:36.678936  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts --> /etc/test/nested/copy/490954/hosts (40 bytes)
	I1212 00:29:36.696209  525066 start.go:296] duration metric: took 170.896684ms for postStartSetup
	I1212 00:29:36.696330  525066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:29:36.696401  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.716202  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.819154  525066 command_runner.go:130] > 18%
	I1212 00:29:36.819742  525066 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:29:36.823869  525066 command_runner.go:130] > 160G
	I1212 00:29:36.824320  525066 fix.go:56] duration metric: took 1.772888094s for fixHost
	I1212 00:29:36.824342  525066 start.go:83] releasing machines lock for "functional-035643", held for 1.772938226s
	I1212 00:29:36.824419  525066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:29:36.841414  525066 ssh_runner.go:195] Run: cat /version.json
	I1212 00:29:36.841444  525066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:29:36.841465  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.841499  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:36.858975  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:36.864277  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:37.063000  525066 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:29:37.063067  525066 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1212 00:29:37.063223  525066 ssh_runner.go:195] Run: systemctl --version
	I1212 00:29:37.069375  525066 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 00:29:37.069421  525066 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 00:29:37.069789  525066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:29:37.107153  525066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:29:37.111099  525066 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:29:37.111476  525066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:29:37.111538  525066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:29:37.119321  525066 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:29:37.119346  525066 start.go:496] detecting cgroup driver to use...
	I1212 00:29:37.119377  525066 detect.go:187] detected "cgroupfs" cgroup driver on host os
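
The "cgroupfs" value here is reported by the host's container engine; the docker info blob captured earlier already carries CgroupDriver:cgroupfs. A minimal Go sketch of reading that field via the same `docker system info --format "{{json .}}"` call this log runs (not minikube's actual detect.go logic):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		// Only the one field we care about; the full payload has many more keys.
		var info struct {
			CgroupDriver string // "cgroupfs" or "systemd"
		}
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Println("cgroup driver:", info.CgroupDriver)
	}
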
	I1212 00:29:37.119429  525066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:29:37.134288  525066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:29:37.147114  525066 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:29:37.147210  525066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:29:37.162260  525066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:29:37.175226  525066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:29:37.287755  525066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:29:37.404746  525066 docker.go:234] disabling docker service ...
	I1212 00:29:37.404828  525066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:29:37.419834  525066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:29:37.433027  525066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:29:37.553874  525066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:29:37.677379  525066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:29:37.696856  525066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:29:37.711415  525066 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 00:29:37.712568  525066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:29:37.712642  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.724126  525066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:29:37.724197  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.733568  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.743368  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.752442  525066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:29:37.761570  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.771444  525066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:37.780014  525066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
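
Applied to a stock CRI-O config, the sed edits above leave the drop-in with roughly the following values (a reconstruction from the commands shown, assuming default section placement: pause_image normally lives under [crio.image] and the remaining keys under [crio.runtime]; the rest of the file is untouched):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
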
	I1212 00:29:37.788901  525066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:29:37.795786  525066 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:29:37.796743  525066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:29:37.804315  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:37.916494  525066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:29:38.098236  525066 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:29:38.098362  525066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:29:38.102398  525066 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 00:29:38.102430  525066 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:29:38.102438  525066 command_runner.go:130] > Device: 0,72	Inode: 1642        Links: 1
	I1212 00:29:38.102445  525066 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:29:38.102451  525066 command_runner.go:130] > Access: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102458  525066 command_runner.go:130] > Modify: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102463  525066 command_runner.go:130] > Change: 2025-12-12 00:29:38.034542795 +0000
	I1212 00:29:38.102467  525066 command_runner.go:130] >  Birth: -
	I1212 00:29:38.102500  525066 start.go:564] Will wait 60s for crictl version
	I1212 00:29:38.102554  525066 ssh_runner.go:195] Run: which crictl
	I1212 00:29:38.105961  525066 command_runner.go:130] > /usr/local/bin/crictl
	I1212 00:29:38.106209  525066 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:29:38.130147  525066 command_runner.go:130] > Version:  0.1.0
	I1212 00:29:38.130215  525066 command_runner.go:130] > RuntimeName:  cri-o
	I1212 00:29:38.130236  525066 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1212 00:29:38.130255  525066 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:29:38.130299  525066 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:29:38.130400  525066 ssh_runner.go:195] Run: crio --version
	I1212 00:29:38.156955  525066 command_runner.go:130] > crio version 1.34.3
	I1212 00:29:38.157026  525066 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 00:29:38.157055  525066 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 00:29:38.157075  525066 command_runner.go:130] >    GitTreeState:   dirty
	I1212 00:29:38.157101  525066 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 00:29:38.157125  525066 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 00:29:38.157142  525066 command_runner.go:130] >    Compiler:       gc
	I1212 00:29:38.157162  525066 command_runner.go:130] >    Platform:       linux/arm64
	I1212 00:29:38.157188  525066 command_runner.go:130] >    Linkmode:       static
	I1212 00:29:38.157205  525066 command_runner.go:130] >    BuildTags:
	I1212 00:29:38.157231  525066 command_runner.go:130] >      static
	I1212 00:29:38.157260  525066 command_runner.go:130] >      netgo
	I1212 00:29:38.157278  525066 command_runner.go:130] >      osusergo
	I1212 00:29:38.157296  525066 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 00:29:38.157309  525066 command_runner.go:130] >      seccomp
	I1212 00:29:38.157334  525066 command_runner.go:130] >      apparmor
	I1212 00:29:38.157350  525066 command_runner.go:130] >      selinux
	I1212 00:29:38.157366  525066 command_runner.go:130] >    LDFlags:          unknown
	I1212 00:29:38.157384  525066 command_runner.go:130] >    SeccompEnabled:   true
	I1212 00:29:38.157415  525066 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 00:29:38.159818  525066 ssh_runner.go:195] Run: crio --version
	I1212 00:29:38.187365  525066 command_runner.go:130] > crio version 1.34.3
	I1212 00:29:38.187391  525066 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 00:29:38.187398  525066 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 00:29:38.187403  525066 command_runner.go:130] >    GitTreeState:   dirty
	I1212 00:29:38.187408  525066 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 00:29:38.187414  525066 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 00:29:38.187418  525066 command_runner.go:130] >    Compiler:       gc
	I1212 00:29:38.187438  525066 command_runner.go:130] >    Platform:       linux/arm64
	I1212 00:29:38.187447  525066 command_runner.go:130] >    Linkmode:       static
	I1212 00:29:38.187451  525066 command_runner.go:130] >    BuildTags:
	I1212 00:29:38.187455  525066 command_runner.go:130] >      static
	I1212 00:29:38.187459  525066 command_runner.go:130] >      netgo
	I1212 00:29:38.187463  525066 command_runner.go:130] >      osusergo
	I1212 00:29:38.187468  525066 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 00:29:38.187481  525066 command_runner.go:130] >      seccomp
	I1212 00:29:38.187489  525066 command_runner.go:130] >      apparmor
	I1212 00:29:38.187494  525066 command_runner.go:130] >      selinux
	I1212 00:29:38.187502  525066 command_runner.go:130] >    LDFlags:          unknown
	I1212 00:29:38.187507  525066 command_runner.go:130] >    SeccompEnabled:   true
	I1212 00:29:38.187511  525066 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 00:29:38.193058  525066 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:29:38.195137  525066 cli_runner.go:164] Run: docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:29:38.211553  525066 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:29:38.215227  525066 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 00:29:38.215507  525066 kubeadm.go:884] updating cluster {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:29:38.215633  525066 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:29:38.215688  525066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:38.248801  525066 command_runner.go:130] > {
	I1212 00:29:38.248822  525066 command_runner.go:130] >   "images":  [
	I1212 00:29:38.248827  525066 command_runner.go:130] >     {
	I1212 00:29:38.248837  525066 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:29:38.248842  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.248851  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:29:38.248855  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248859  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.248869  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 00:29:38.248877  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 00:29:38.248880  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248885  525066 command_runner.go:130] >       "size":  "111333938",
	I1212 00:29:38.248893  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.248898  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.248901  525066 command_runner.go:130] >     },
	I1212 00:29:38.248905  525066 command_runner.go:130] >     {
	I1212 00:29:38.248911  525066 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:29:38.248926  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.248931  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:29:38.248935  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248939  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.248951  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 00:29:38.248960  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:29:38.248967  525066 command_runner.go:130] >       ],
	I1212 00:29:38.248971  525066 command_runner.go:130] >       "size":  "29037500",
	I1212 00:29:38.248975  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.248983  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.248987  525066 command_runner.go:130] >     },
	I1212 00:29:38.248990  525066 command_runner.go:130] >     {
	I1212 00:29:38.248998  525066 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:29:38.249004  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249018  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:29:38.249026  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249036  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249044  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 00:29:38.249058  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 00:29:38.249061  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249065  525066 command_runner.go:130] >       "size":  "74491780",
	I1212 00:29:38.249070  525066 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:29:38.249073  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249080  525066 command_runner.go:130] >     },
	I1212 00:29:38.249083  525066 command_runner.go:130] >     {
	I1212 00:29:38.249093  525066 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:29:38.249104  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249109  525066 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:29:38.249112  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249116  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249125  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 00:29:38.249135  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 00:29:38.249139  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249142  525066 command_runner.go:130] >       "size":  "60857170",
	I1212 00:29:38.249146  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249150  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249153  525066 command_runner.go:130] >       },
	I1212 00:29:38.249166  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249173  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249177  525066 command_runner.go:130] >     },
	I1212 00:29:38.249179  525066 command_runner.go:130] >     {
	I1212 00:29:38.249186  525066 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:29:38.249192  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249197  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:29:38.249201  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249205  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249215  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 00:29:38.249230  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 00:29:38.249234  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249241  525066 command_runner.go:130] >       "size":  "84949999",
	I1212 00:29:38.249245  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249249  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249254  525066 command_runner.go:130] >       },
	I1212 00:29:38.249259  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249263  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249268  525066 command_runner.go:130] >     },
	I1212 00:29:38.249272  525066 command_runner.go:130] >     {
	I1212 00:29:38.249281  525066 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:29:38.249294  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249301  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:29:38.249304  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249308  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249317  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 00:29:38.249326  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 00:29:38.249337  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249341  525066 command_runner.go:130] >       "size":  "72170325",
	I1212 00:29:38.249345  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249348  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249356  525066 command_runner.go:130] >       },
	I1212 00:29:38.249364  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249367  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249371  525066 command_runner.go:130] >     },
	I1212 00:29:38.249374  525066 command_runner.go:130] >     {
	I1212 00:29:38.249381  525066 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:29:38.249386  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249391  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:29:38.249394  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249398  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249409  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 00:29:38.249426  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:29:38.249434  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249438  525066 command_runner.go:130] >       "size":  "74106775",
	I1212 00:29:38.249450  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249454  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249458  525066 command_runner.go:130] >     },
	I1212 00:29:38.249461  525066 command_runner.go:130] >     {
	I1212 00:29:38.249468  525066 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:29:38.249472  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249481  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:29:38.249484  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249488  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249502  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 00:29:38.249522  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 00:29:38.249528  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249532  525066 command_runner.go:130] >       "size":  "49822549",
	I1212 00:29:38.249535  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249539  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.249549  525066 command_runner.go:130] >       },
	I1212 00:29:38.249553  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249556  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.249559  525066 command_runner.go:130] >     },
	I1212 00:29:38.249562  525066 command_runner.go:130] >     {
	I1212 00:29:38.249568  525066 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:29:38.249572  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.249576  525066 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.249581  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249586  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.249598  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 00:29:38.249606  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 00:29:38.249613  525066 command_runner.go:130] >       ],
	I1212 00:29:38.249617  525066 command_runner.go:130] >       "size":  "519884",
	I1212 00:29:38.249621  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.249626  525066 command_runner.go:130] >         "value":  "65535"
	I1212 00:29:38.249633  525066 command_runner.go:130] >       },
	I1212 00:29:38.249642  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.249646  525066 command_runner.go:130] >       "pinned":  true
	I1212 00:29:38.249649  525066 command_runner.go:130] >     }
	I1212 00:29:38.249653  525066 command_runner.go:130] >   ]
	I1212 00:29:38.249656  525066 command_runner.go:130] > }
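
The preload check parses this `sudo crictl images --output json` output and compares it against the expected image set. A minimal Go sketch of decoding the schema visible above (struct and field names mirror the JSON keys shown; this is illustrative, not minikube's own code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Mirrors the fields visible in the crictl output above.
	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // crictl emits size as a string
		Pinned      bool     `json:"pinned"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var resp struct {
			Images []crictlImage `json:"images"`
		}
		if err := json.Unmarshal(out, &resp); err != nil {
			panic(err)
		}
		for _, img := range resp.Images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}
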
	I1212 00:29:38.252138  525066 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:38.252165  525066 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:29:38.252226  525066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:29:38.276626  525066 command_runner.go:130] > {
	I1212 00:29:38.276647  525066 command_runner.go:130] >   "images":  [
	I1212 00:29:38.276651  525066 command_runner.go:130] >     {
	I1212 00:29:38.276660  525066 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:29:38.276674  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276681  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:29:38.276684  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276690  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276700  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 00:29:38.276711  525066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 00:29:38.276717  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276721  525066 command_runner.go:130] >       "size":  "111333938",
	I1212 00:29:38.276725  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.276731  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276737  525066 command_runner.go:130] >     },
	I1212 00:29:38.276740  525066 command_runner.go:130] >     {
	I1212 00:29:38.276747  525066 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:29:38.276754  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276760  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:29:38.276767  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276771  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276781  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 00:29:38.276790  525066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:29:38.276794  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276799  525066 command_runner.go:130] >       "size":  "29037500",
	I1212 00:29:38.276807  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.276815  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276822  525066 command_runner.go:130] >     },
	I1212 00:29:38.276826  525066 command_runner.go:130] >     {
	I1212 00:29:38.276833  525066 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:29:38.276839  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276845  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:29:38.276850  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276854  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276868  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 00:29:38.276876  525066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 00:29:38.276879  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276883  525066 command_runner.go:130] >       "size":  "74491780",
	I1212 00:29:38.276891  525066 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:29:38.276895  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.276901  525066 command_runner.go:130] >     },
	I1212 00:29:38.276904  525066 command_runner.go:130] >     {
	I1212 00:29:38.276911  525066 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:29:38.276918  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.276922  525066 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:29:38.276925  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276930  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.276940  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 00:29:38.276951  525066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 00:29:38.276954  525066 command_runner.go:130] >       ],
	I1212 00:29:38.276973  525066 command_runner.go:130] >       "size":  "60857170",
	I1212 00:29:38.276977  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.276980  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.276983  525066 command_runner.go:130] >       },
	I1212 00:29:38.276994  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277001  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277004  525066 command_runner.go:130] >     },
	I1212 00:29:38.277007  525066 command_runner.go:130] >     {
	I1212 00:29:38.277014  525066 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:29:38.277019  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277032  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:29:38.277039  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277043  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277051  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 00:29:38.277066  525066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 00:29:38.277070  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277074  525066 command_runner.go:130] >       "size":  "84949999",
	I1212 00:29:38.277078  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277086  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277089  525066 command_runner.go:130] >       },
	I1212 00:29:38.277093  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277101  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277104  525066 command_runner.go:130] >     },
	I1212 00:29:38.277110  525066 command_runner.go:130] >     {
	I1212 00:29:38.277117  525066 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:29:38.277123  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277129  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:29:38.277132  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277136  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277145  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 00:29:38.277157  525066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 00:29:38.277160  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277164  525066 command_runner.go:130] >       "size":  "72170325",
	I1212 00:29:38.277167  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277171  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277175  525066 command_runner.go:130] >       },
	I1212 00:29:38.277181  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277186  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277191  525066 command_runner.go:130] >     },
	I1212 00:29:38.277194  525066 command_runner.go:130] >     {
	I1212 00:29:38.277203  525066 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:29:38.277209  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277215  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:29:38.277225  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277229  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277238  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 00:29:38.277251  525066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:29:38.277255  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277259  525066 command_runner.go:130] >       "size":  "74106775",
	I1212 00:29:38.277263  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277269  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277273  525066 command_runner.go:130] >     },
	I1212 00:29:38.277276  525066 command_runner.go:130] >     {
	I1212 00:29:38.277283  525066 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:29:38.277289  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277294  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:29:38.277297  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277301  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277309  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 00:29:38.277326  525066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 00:29:38.277330  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277334  525066 command_runner.go:130] >       "size":  "49822549",
	I1212 00:29:38.277340  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277344  525066 command_runner.go:130] >         "value":  "0"
	I1212 00:29:38.277347  525066 command_runner.go:130] >       },
	I1212 00:29:38.277351  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277357  525066 command_runner.go:130] >       "pinned":  false
	I1212 00:29:38.277360  525066 command_runner.go:130] >     },
	I1212 00:29:38.277364  525066 command_runner.go:130] >     {
	I1212 00:29:38.277373  525066 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:29:38.277377  525066 command_runner.go:130] >       "repoTags":  [
	I1212 00:29:38.277390  525066 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.277394  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277397  525066 command_runner.go:130] >       "repoDigests":  [
	I1212 00:29:38.277405  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 00:29:38.277416  525066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 00:29:38.277424  525066 command_runner.go:130] >       ],
	I1212 00:29:38.277429  525066 command_runner.go:130] >       "size":  "519884",
	I1212 00:29:38.277432  525066 command_runner.go:130] >       "uid":  {
	I1212 00:29:38.277438  525066 command_runner.go:130] >         "value":  "65535"
	I1212 00:29:38.277442  525066 command_runner.go:130] >       },
	I1212 00:29:38.277447  525066 command_runner.go:130] >       "username":  "",
	I1212 00:29:38.277453  525066 command_runner.go:130] >       "pinned":  true
	I1212 00:29:38.277456  525066 command_runner.go:130] >     }
	I1212 00:29:38.277459  525066 command_runner.go:130] >   ]
	I1212 00:29:38.277464  525066 command_runner.go:130] > }
	I1212 00:29:38.282583  525066 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:29:38.282606  525066 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:29:38.282613  525066 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
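The two "sudo crictl images --output json" dumps above are what the preload check (crio.go:514, cache_images.go:86) consumes: the image list is decoded and its repoTags are compared against the images required for the requested Kubernetes version. A minimal sketch of that kind of check follows, assuming only that crictl is on the PATH and sudo access is available; the struct fields mirror the JSON keys visible in the log, and this is an illustration, not minikube's actual implementation:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON shape emitted by `crictl images --output json`
// as seen in the log above; only the fields needed for the check are decoded.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allPreloaded reports whether every image in required (e.g.
// "registry.k8s.io/kube-apiserver:v1.35.0-beta.0") is already present
// in the CRI-O image store.
func allPreloaded(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, fmt.Errorf("missing %s", want)
		}
	}
	return true, nil
}

func main() {
	ok, err := allPreloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/pause:3.10.1",
	})
	fmt.Println(ok, err)
}

Against the node above, every tag listed in the log (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, pause, kindnetd, storage-provisioner) would resolve from the have set, which is why the log concludes with "Images are preloaded, skipping loading".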
	I1212 00:29:38.282744  525066 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-035643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
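The kubelet stanza logged by kubeadm.go:947 is the systemd drop-in minikube writes before invoking kubeadm; the first, empty ExecStart= line clears the command inherited from the packaged unit so the second ExecStart= fully replaces it. A rough sketch of rendering such a drop-in with text/template, with field names invented for illustration and values taken from the log above (this is not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// kubeletTmpl reproduces the drop-in visible in the log: the blank
// ExecStart= resets the unit before the full command line is set.
const kubeletTmpl = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime":           "crio",
		"KubernetesVersion": "v1.35.0-beta.0",
		"NodeName":          "functional-035643",
		"NodeIP":            "192.168.49.2",
	})
}

Written to a kubelet.service.d drop-in directory and followed by a systemctl daemon-reload, a file like this makes the kubelet run with exactly the flags shown in the log.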
	I1212 00:29:38.282831  525066 ssh_runner.go:195] Run: crio config
	I1212 00:29:38.339065  525066 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 00:29:38.339140  525066 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 00:29:38.339162  525066 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 00:29:38.339180  525066 command_runner.go:130] > #
	I1212 00:29:38.339218  525066 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 00:29:38.339243  525066 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 00:29:38.339261  525066 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 00:29:38.339304  525066 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 00:29:38.339327  525066 command_runner.go:130] > # reload'.
	I1212 00:29:38.339346  525066 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 00:29:38.339379  525066 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 00:29:38.339402  525066 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 00:29:38.339422  525066 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 00:29:38.339436  525066 command_runner.go:130] > [crio]
	I1212 00:29:38.339466  525066 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 00:29:38.339488  525066 command_runner.go:130] > # containers images, in this directory.
	I1212 00:29:38.339510  525066 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 00:29:38.339541  525066 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 00:29:38.339562  525066 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1212 00:29:38.339583  525066 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1212 00:29:38.339600  525066 command_runner.go:130] > # imagestore = ""
	I1212 00:29:38.339629  525066 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 00:29:38.339652  525066 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 00:29:38.339676  525066 command_runner.go:130] > # storage_driver = "overlay"
	I1212 00:29:38.339707  525066 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 00:29:38.339730  525066 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 00:29:38.339746  525066 command_runner.go:130] > # storage_option = [
	I1212 00:29:38.339762  525066 command_runner.go:130] > # ]
	I1212 00:29:38.339794  525066 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 00:29:38.339818  525066 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 00:29:38.339834  525066 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 00:29:38.339852  525066 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 00:29:38.339890  525066 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 00:29:38.339907  525066 command_runner.go:130] > # always happen on a node reboot
	I1212 00:29:38.339923  525066 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 00:29:38.339959  525066 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 00:29:38.339984  525066 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 00:29:38.340001  525066 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 00:29:38.340029  525066 command_runner.go:130] > # version_file_persist = ""
	I1212 00:29:38.340052  525066 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 00:29:38.340072  525066 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 00:29:38.340087  525066 command_runner.go:130] > # internal_wipe = true
	I1212 00:29:38.340117  525066 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1212 00:29:38.340140  525066 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1212 00:29:38.340157  525066 command_runner.go:130] > # internal_repair = true
	I1212 00:29:38.340175  525066 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 00:29:38.340208  525066 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 00:29:38.340228  525066 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 00:29:38.340246  525066 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 00:29:38.340277  525066 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 00:29:38.340300  525066 command_runner.go:130] > [crio.api]
	I1212 00:29:38.340319  525066 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 00:29:38.340336  525066 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 00:29:38.340365  525066 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 00:29:38.340387  525066 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 00:29:38.340407  525066 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 00:29:38.340447  525066 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 00:29:38.340822  525066 command_runner.go:130] > # stream_port = "0"
	I1212 00:29:38.340835  525066 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 00:29:38.341007  525066 command_runner.go:130] > # stream_enable_tls = false
	I1212 00:29:38.341018  525066 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 00:29:38.341210  525066 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 00:29:38.341221  525066 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 00:29:38.341229  525066 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1212 00:29:38.341233  525066 command_runner.go:130] > # stream_tls_cert = ""
	I1212 00:29:38.341239  525066 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 00:29:38.341245  525066 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1212 00:29:38.341249  525066 command_runner.go:130] > # stream_tls_key = ""
	I1212 00:29:38.341255  525066 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 00:29:38.341261  525066 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 00:29:38.341272  525066 command_runner.go:130] > # automatically pick up the changes.
	I1212 00:29:38.341446  525066 command_runner.go:130] > # stream_tls_ca = ""
	I1212 00:29:38.341475  525066 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:29:38.341751  525066 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 00:29:38.341765  525066 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:29:38.341770  525066 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1212 00:29:38.341777  525066 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 00:29:38.341782  525066 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 00:29:38.341786  525066 command_runner.go:130] > [crio.runtime]
	I1212 00:29:38.341792  525066 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 00:29:38.341798  525066 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 00:29:38.341801  525066 command_runner.go:130] > # "nofile=1024:2048"
	I1212 00:29:38.341807  525066 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 00:29:38.341811  525066 command_runner.go:130] > # default_ulimits = [
	I1212 00:29:38.341814  525066 command_runner.go:130] > # ]
	I1212 00:29:38.341821  525066 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 00:29:38.341824  525066 command_runner.go:130] > # no_pivot = false
	I1212 00:29:38.341830  525066 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 00:29:38.341836  525066 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 00:29:38.341841  525066 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 00:29:38.341847  525066 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 00:29:38.341851  525066 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 00:29:38.341858  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:29:38.342059  525066 command_runner.go:130] > # conmon = ""
	I1212 00:29:38.342069  525066 command_runner.go:130] > # Cgroup setting for conmon
	I1212 00:29:38.342077  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 00:29:38.342081  525066 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 00:29:38.342087  525066 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 00:29:38.342093  525066 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 00:29:38.342100  525066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:29:38.342293  525066 command_runner.go:130] > # conmon_env = [
	I1212 00:29:38.342301  525066 command_runner.go:130] > # ]
	I1212 00:29:38.342307  525066 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 00:29:38.342312  525066 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 00:29:38.342318  525066 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 00:29:38.342321  525066 command_runner.go:130] > # default_env = [
	I1212 00:29:38.342325  525066 command_runner.go:130] > # ]
	I1212 00:29:38.342330  525066 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 00:29:38.342338  525066 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1212 00:29:38.342531  525066 command_runner.go:130] > # selinux = false
	I1212 00:29:38.342542  525066 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 00:29:38.342551  525066 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1212 00:29:38.342556  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342765  525066 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:29:38.342777  525066 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1212 00:29:38.342783  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342787  525066 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1212 00:29:38.342804  525066 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 00:29:38.342810  525066 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 00:29:38.342817  525066 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 00:29:38.342823  525066 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 00:29:38.342828  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.342833  525066 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 00:29:38.342838  525066 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 00:29:38.342842  525066 command_runner.go:130] > # the cgroup blockio controller.
	I1212 00:29:38.343029  525066 command_runner.go:130] > # blockio_config_file = ""
	I1212 00:29:38.343040  525066 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1212 00:29:38.343044  525066 command_runner.go:130] > # blockio parameters.
	I1212 00:29:38.343244  525066 command_runner.go:130] > # blockio_reload = false
	I1212 00:29:38.343255  525066 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 00:29:38.343260  525066 command_runner.go:130] > # irqbalance daemon.
	I1212 00:29:38.343265  525066 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 00:29:38.343271  525066 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1212 00:29:38.343278  525066 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1212 00:29:38.343285  525066 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1212 00:29:38.343472  525066 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1212 00:29:38.343488  525066 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 00:29:38.343494  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.343668  525066 command_runner.go:130] > # rdt_config_file = ""
	I1212 00:29:38.343679  525066 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 00:29:38.343683  525066 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 00:29:38.343690  525066 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 00:29:38.343893  525066 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 00:29:38.343905  525066 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 00:29:38.343912  525066 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 00:29:38.343920  525066 command_runner.go:130] > # will be added.
	I1212 00:29:38.343925  525066 command_runner.go:130] > # default_capabilities = [
	I1212 00:29:38.344172  525066 command_runner.go:130] > # 	"CHOWN",
	I1212 00:29:38.344180  525066 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 00:29:38.344184  525066 command_runner.go:130] > # 	"FSETID",
	I1212 00:29:38.344187  525066 command_runner.go:130] > # 	"FOWNER",
	I1212 00:29:38.344191  525066 command_runner.go:130] > # 	"SETGID",
	I1212 00:29:38.344194  525066 command_runner.go:130] > # 	"SETUID",
	I1212 00:29:38.344217  525066 command_runner.go:130] > # 	"SETPCAP",
	I1212 00:29:38.344397  525066 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 00:29:38.344405  525066 command_runner.go:130] > # 	"KILL",
	I1212 00:29:38.344408  525066 command_runner.go:130] > # ]
	I1212 00:29:38.344417  525066 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 00:29:38.344424  525066 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 00:29:38.344614  525066 command_runner.go:130] > # add_inheritable_capabilities = false
	I1212 00:29:38.344634  525066 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 00:29:38.344641  525066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:29:38.344645  525066 command_runner.go:130] > default_sysctls = [
	I1212 00:29:38.344818  525066 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1212 00:29:38.344834  525066 command_runner.go:130] > ]
	I1212 00:29:38.344839  525066 command_runner.go:130] > # List of devices on the host that a
	I1212 00:29:38.344846  525066 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 00:29:38.344850  525066 command_runner.go:130] > # allowed_devices = [
	I1212 00:29:38.345064  525066 command_runner.go:130] > # 	"/dev/fuse",
	I1212 00:29:38.345072  525066 command_runner.go:130] > # 	"/dev/net/tun",
	I1212 00:29:38.345076  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345089  525066 command_runner.go:130] > # List of additional devices. specified as
	I1212 00:29:38.345098  525066 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 00:29:38.345141  525066 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 00:29:38.345151  525066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:29:38.345155  525066 command_runner.go:130] > # additional_devices = [
	I1212 00:29:38.345354  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345364  525066 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 00:29:38.345368  525066 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 00:29:38.345371  525066 command_runner.go:130] > # 	"/etc/cdi",
	I1212 00:29:38.345585  525066 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 00:29:38.345593  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345600  525066 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 00:29:38.345606  525066 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 00:29:38.345609  525066 command_runner.go:130] > # Defaults to false.
	I1212 00:29:38.345614  525066 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 00:29:38.345652  525066 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 00:29:38.345661  525066 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 00:29:38.345665  525066 command_runner.go:130] > # hooks_dir = [
	I1212 00:29:38.345877  525066 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 00:29:38.345885  525066 command_runner.go:130] > # ]
	I1212 00:29:38.345892  525066 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 00:29:38.345899  525066 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 00:29:38.345904  525066 command_runner.go:130] > # its default mounts from the following two files:
	I1212 00:29:38.345907  525066 command_runner.go:130] > #
	I1212 00:29:38.345914  525066 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 00:29:38.345957  525066 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 00:29:38.345963  525066 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 00:29:38.345966  525066 command_runner.go:130] > #
	I1212 00:29:38.345972  525066 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 00:29:38.345979  525066 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 00:29:38.345986  525066 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 00:29:38.345991  525066 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 00:29:38.346020  525066 command_runner.go:130] > #
	I1212 00:29:38.346210  525066 command_runner.go:130] > # default_mounts_file = ""
	I1212 00:29:38.346221  525066 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 00:29:38.346228  525066 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 00:29:38.346444  525066 command_runner.go:130] > # pids_limit = -1
	I1212 00:29:38.346456  525066 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1212 00:29:38.346463  525066 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 00:29:38.346469  525066 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 00:29:38.346478  525066 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 00:29:38.346512  525066 command_runner.go:130] > # log_size_max = -1
	I1212 00:29:38.346523  525066 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 00:29:38.346724  525066 command_runner.go:130] > # log_to_journald = false
	I1212 00:29:38.346736  525066 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 00:29:38.346742  525066 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 00:29:38.346747  525066 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 00:29:38.347111  525066 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 00:29:38.347122  525066 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 00:29:38.347127  525066 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 00:29:38.347132  525066 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 00:29:38.347136  525066 command_runner.go:130] > # read_only = false
	I1212 00:29:38.347142  525066 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 00:29:38.347149  525066 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 00:29:38.347186  525066 command_runner.go:130] > # live configuration reload.
	I1212 00:29:38.347359  525066 command_runner.go:130] > # log_level = "info"
	I1212 00:29:38.347376  525066 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 00:29:38.347381  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.347597  525066 command_runner.go:130] > # log_filter = ""
	I1212 00:29:38.347608  525066 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 00:29:38.347615  525066 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 00:29:38.347619  525066 command_runner.go:130] > # separated by comma.
	I1212 00:29:38.347671  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347679  525066 command_runner.go:130] > # uid_mappings = ""
	I1212 00:29:38.347686  525066 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 00:29:38.347692  525066 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 00:29:38.347696  525066 command_runner.go:130] > # separated by comma.
	I1212 00:29:38.347704  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347707  525066 command_runner.go:130] > # gid_mappings = ""
	I1212 00:29:38.347714  525066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 00:29:38.347746  525066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:29:38.347757  525066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:29:38.347765  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.347769  525066 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 00:29:38.347775  525066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 00:29:38.347781  525066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:29:38.347787  525066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:29:38.347822  525066 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:29:38.348158  525066 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 00:29:38.348170  525066 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 00:29:38.348176  525066 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 00:29:38.348182  525066 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 00:29:38.348415  525066 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 00:29:38.348427  525066 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 00:29:38.348433  525066 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 00:29:38.348438  525066 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 00:29:38.348442  525066 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 00:29:38.348641  525066 command_runner.go:130] > # drop_infra_ctr = true
	I1212 00:29:38.348653  525066 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 00:29:38.348659  525066 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 00:29:38.348666  525066 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 00:29:38.348674  525066 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 00:29:38.348712  525066 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1212 00:29:38.348725  525066 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1212 00:29:38.348731  525066 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1212 00:29:38.348736  525066 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1212 00:29:38.348935  525066 command_runner.go:130] > # shared_cpuset = ""
	I1212 00:29:38.348946  525066 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 00:29:38.348952  525066 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 00:29:38.348956  525066 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 00:29:38.348964  525066 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 00:29:38.349178  525066 command_runner.go:130] > # pinns_path = ""
	I1212 00:29:38.349189  525066 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1212 00:29:38.349195  525066 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1212 00:29:38.349199  525066 command_runner.go:130] > # enable_criu_support = true
	I1212 00:29:38.349214  525066 command_runner.go:130] > # Enable/disable the generation of the container,
	I1212 00:29:38.349253  525066 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1212 00:29:38.349272  525066 command_runner.go:130] > # enable_pod_events = false
	I1212 00:29:38.349291  525066 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 00:29:38.349322  525066 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1212 00:29:38.349505  525066 command_runner.go:130] > # default_runtime = "crun"
	I1212 00:29:38.349536  525066 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 00:29:38.349573  525066 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 00:29:38.349601  525066 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 00:29:38.349618  525066 command_runner.go:130] > # creation as a file is not desired either.
	I1212 00:29:38.349653  525066 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 00:29:38.349674  525066 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 00:29:38.349690  525066 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 00:29:38.349956  525066 command_runner.go:130] > # ]
	I1212 00:29:38.350003  525066 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 00:29:38.350025  525066 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 00:29:38.350043  525066 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1212 00:29:38.350074  525066 command_runner.go:130] > # Each entry in the table should follow the format:
	I1212 00:29:38.350093  525066 command_runner.go:130] > #
	I1212 00:29:38.350110  525066 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1212 00:29:38.350127  525066 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1212 00:29:38.350158  525066 command_runner.go:130] > # runtime_type = "oci"
	I1212 00:29:38.350179  525066 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1212 00:29:38.350201  525066 command_runner.go:130] > # inherit_default_runtime = false
	I1212 00:29:38.350218  525066 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1212 00:29:38.350253  525066 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1212 00:29:38.350271  525066 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1212 00:29:38.350287  525066 command_runner.go:130] > # monitor_env = []
	I1212 00:29:38.350317  525066 command_runner.go:130] > # privileged_without_host_devices = false
	I1212 00:29:38.350339  525066 command_runner.go:130] > # allowed_annotations = []
	I1212 00:29:38.350358  525066 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1212 00:29:38.350372  525066 command_runner.go:130] > # no_sync_log = false
	I1212 00:29:38.350402  525066 command_runner.go:130] > # default_annotations = {}
	I1212 00:29:38.350419  525066 command_runner.go:130] > # stream_websockets = false
	I1212 00:29:38.350436  525066 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:29:38.350499  525066 command_runner.go:130] > # Where:
	I1212 00:29:38.350529  525066 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1212 00:29:38.350561  525066 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1212 00:29:38.350588  525066 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 00:29:38.350607  525066 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 00:29:38.350635  525066 command_runner.go:130] > #   in $PATH.
	I1212 00:29:38.350670  525066 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1212 00:29:38.350713  525066 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 00:29:38.350735  525066 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1212 00:29:38.350750  525066 command_runner.go:130] > #   state.
	I1212 00:29:38.350780  525066 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 00:29:38.350928  525066 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 00:29:38.351028  525066 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1212 00:29:38.351155  525066 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1212 00:29:38.351251  525066 command_runner.go:130] > #   the values from the default runtime on load time.
	I1212 00:29:38.351344  525066 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 00:29:38.351530  525066 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 00:29:38.351817  525066 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 00:29:38.352119  525066 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 00:29:38.352319  525066 command_runner.go:130] > #   The currently recognized values are:
	I1212 00:29:38.352557  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 00:29:38.352766  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 00:29:38.352929  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 00:29:38.353036  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 00:29:38.353153  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 00:29:38.353519  525066 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 00:29:38.353569  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1212 00:29:38.353580  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1212 00:29:38.353587  525066 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 00:29:38.353593  525066 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1212 00:29:38.353637  525066 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1212 00:29:38.353645  525066 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1212 00:29:38.353652  525066 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1212 00:29:38.353658  525066 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1212 00:29:38.353664  525066 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1212 00:29:38.353679  525066 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1212 00:29:38.353695  525066 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1212 00:29:38.353699  525066 command_runner.go:130] > #   deprecated option "conmon".
	I1212 00:29:38.353706  525066 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1212 00:29:38.353766  525066 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1212 00:29:38.353805  525066 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1212 00:29:38.353814  525066 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 00:29:38.353822  525066 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1212 00:29:38.353826  525066 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1212 00:29:38.353834  525066 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1212 00:29:38.353838  525066 command_runner.go:130] > #   conmon-rs by using:
	I1212 00:29:38.353893  525066 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1212 00:29:38.353903  525066 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1212 00:29:38.353947  525066 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1212 00:29:38.353958  525066 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1212 00:29:38.353963  525066 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1212 00:29:38.353971  525066 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1212 00:29:38.353979  525066 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1212 00:29:38.353984  525066 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1212 00:29:38.353992  525066 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1212 00:29:38.354039  525066 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1212 00:29:38.354048  525066 command_runner.go:130] > #   when a machine crash happens.
	I1212 00:29:38.354056  525066 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1212 00:29:38.354064  525066 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1212 00:29:38.354100  525066 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1212 00:29:38.354106  525066 command_runner.go:130] > #   seccomp profile for the runtime.
	I1212 00:29:38.354113  525066 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1212 00:29:38.354120  525066 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1212 00:29:38.354123  525066 command_runner.go:130] > #
	I1212 00:29:38.354169  525066 command_runner.go:130] > # Using the seccomp notifier feature:
	I1212 00:29:38.354175  525066 command_runner.go:130] > #
	I1212 00:29:38.354188  525066 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1212 00:29:38.354195  525066 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1212 00:29:38.354198  525066 command_runner.go:130] > #
	I1212 00:29:38.354204  525066 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1212 00:29:38.354210  525066 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1212 00:29:38.354212  525066 command_runner.go:130] > #
	I1212 00:29:38.354258  525066 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1212 00:29:38.354270  525066 command_runner.go:130] > # feature.
	I1212 00:29:38.354273  525066 command_runner.go:130] > #
	I1212 00:29:38.354279  525066 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1212 00:29:38.354286  525066 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1212 00:29:38.354292  525066 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1212 00:29:38.354298  525066 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1212 00:29:38.354350  525066 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1212 00:29:38.354355  525066 command_runner.go:130] > #
	I1212 00:29:38.354362  525066 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1212 00:29:38.354402  525066 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1212 00:29:38.354408  525066 command_runner.go:130] > #
	I1212 00:29:38.354414  525066 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1212 00:29:38.354420  525066 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1212 00:29:38.354423  525066 command_runner.go:130] > #
	I1212 00:29:38.354429  525066 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1212 00:29:38.354471  525066 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1212 00:29:38.354477  525066 command_runner.go:130] > # limitation.
	I1212 00:29:38.354481  525066 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1212 00:29:38.354485  525066 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1212 00:29:38.354492  525066 command_runner.go:130] > runtime_type = ""
	I1212 00:29:38.354498  525066 command_runner.go:130] > runtime_root = "/run/crun"
	I1212 00:29:38.354502  525066 command_runner.go:130] > inherit_default_runtime = false
	I1212 00:29:38.354538  525066 command_runner.go:130] > runtime_config_path = ""
	I1212 00:29:38.354545  525066 command_runner.go:130] > container_min_memory = ""
	I1212 00:29:38.354550  525066 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 00:29:38.354554  525066 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 00:29:38.354558  525066 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:29:38.354561  525066 command_runner.go:130] > allowed_annotations = [
	I1212 00:29:38.354565  525066 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1212 00:29:38.354568  525066 command_runner.go:130] > ]
	I1212 00:29:38.354573  525066 command_runner.go:130] > privileged_without_host_devices = false
	I1212 00:29:38.354577  525066 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 00:29:38.354588  525066 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1212 00:29:38.354592  525066 command_runner.go:130] > runtime_type = ""
	I1212 00:29:38.354595  525066 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 00:29:38.354647  525066 command_runner.go:130] > inherit_default_runtime = false
	I1212 00:29:38.354654  525066 command_runner.go:130] > runtime_config_path = ""
	I1212 00:29:38.354659  525066 command_runner.go:130] > container_min_memory = ""
	I1212 00:29:38.354663  525066 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 00:29:38.354667  525066 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 00:29:38.354671  525066 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:29:38.354675  525066 command_runner.go:130] > privileged_without_host_devices = false
	I1212 00:29:38.354692  525066 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 00:29:38.354700  525066 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 00:29:38.354706  525066 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 00:29:38.354719  525066 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1212 00:29:38.354731  525066 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1212 00:29:38.354778  525066 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1212 00:29:38.354787  525066 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1212 00:29:38.354793  525066 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 00:29:38.354803  525066 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 00:29:38.354848  525066 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 00:29:38.354862  525066 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 00:29:38.354870  525066 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 00:29:38.354909  525066 command_runner.go:130] > # Example:
	I1212 00:29:38.354916  525066 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 00:29:38.354921  525066 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 00:29:38.354929  525066 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 00:29:38.354970  525066 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 00:29:38.354976  525066 command_runner.go:130] > # cpuset = "0-1"
	I1212 00:29:38.354979  525066 command_runner.go:130] > # cpushares = "5"
	I1212 00:29:38.354982  525066 command_runner.go:130] > # cpuquota = "1000"
	I1212 00:29:38.354986  525066 command_runner.go:130] > # cpuperiod = "100000"
	I1212 00:29:38.354989  525066 command_runner.go:130] > # cpulimit = "35"
	I1212 00:29:38.354992  525066 command_runner.go:130] > # Where:
	I1212 00:29:38.355002  525066 command_runner.go:130] > # The workload name is workload-type.
	I1212 00:29:38.355009  525066 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 00:29:38.355015  525066 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 00:29:38.355066  525066 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 00:29:38.355077  525066 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 00:29:38.355083  525066 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 00:29:38.355088  525066 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1212 00:29:38.355095  525066 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1212 00:29:38.355099  525066 command_runner.go:130] > # Default value is set to true
	I1212 00:29:38.355467  525066 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1212 00:29:38.355620  525066 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1212 00:29:38.355721  525066 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1212 00:29:38.355871  525066 command_runner.go:130] > # Default value is set to 'false'
	I1212 00:29:38.356033  525066 command_runner.go:130] > # disable_hostport_mapping = false
	I1212 00:29:38.356163  525066 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1212 00:29:38.356284  525066 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1212 00:29:38.356367  525066 command_runner.go:130] > # timezone = ""
	I1212 00:29:38.356485  525066 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 00:29:38.356560  525066 command_runner.go:130] > #
	I1212 00:29:38.356636  525066 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 00:29:38.356830  525066 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1212 00:29:38.356937  525066 command_runner.go:130] > [crio.image]
	I1212 00:29:38.357065  525066 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 00:29:38.357172  525066 command_runner.go:130] > # default_transport = "docker://"
	I1212 00:29:38.357258  525066 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 00:29:38.357455  525066 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:29:38.357729  525066 command_runner.go:130] > # global_auth_file = ""
	I1212 00:29:38.357787  525066 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 00:29:38.357796  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.357801  525066 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1212 00:29:38.357809  525066 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 00:29:38.357821  525066 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:29:38.357827  525066 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:29:38.357837  525066 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 00:29:38.357843  525066 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 00:29:38.357850  525066 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 00:29:38.358627  525066 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 00:29:38.358638  525066 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 00:29:38.358643  525066 command_runner.go:130] > # pause_command = "/pause"
	I1212 00:29:38.358649  525066 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1212 00:29:38.358655  525066 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1212 00:29:38.358662  525066 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1212 00:29:38.358668  525066 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1212 00:29:38.358674  525066 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1212 00:29:38.358693  525066 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1212 00:29:38.358700  525066 command_runner.go:130] > # pinned_images = [
	I1212 00:29:38.358703  525066 command_runner.go:130] > # ]
	I1212 00:29:38.358709  525066 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 00:29:38.358716  525066 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 00:29:38.358723  525066 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 00:29:38.358729  525066 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 00:29:38.358734  525066 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 00:29:38.358740  525066 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1212 00:29:38.358745  525066 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1212 00:29:38.358752  525066 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1212 00:29:38.358758  525066 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1212 00:29:38.358764  525066 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1212 00:29:38.358771  525066 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1212 00:29:38.358776  525066 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1212 00:29:38.358782  525066 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 00:29:38.358788  525066 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 00:29:38.358791  525066 command_runner.go:130] > # changing them here.
	I1212 00:29:38.358801  525066 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1212 00:29:38.358805  525066 command_runner.go:130] > # insecure_registries = [
	I1212 00:29:38.358808  525066 command_runner.go:130] > # ]
	I1212 00:29:38.358814  525066 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 00:29:38.358828  525066 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 00:29:38.358833  525066 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 00:29:38.358838  525066 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 00:29:38.358842  525066 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 00:29:38.358848  525066 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1212 00:29:38.358855  525066 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1212 00:29:38.358860  525066 command_runner.go:130] > # auto_reload_registries = false
	I1212 00:29:38.358866  525066 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1212 00:29:38.358874  525066 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1212 00:29:38.358881  525066 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1212 00:29:38.358885  525066 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1212 00:29:38.358889  525066 command_runner.go:130] > # The mode of short name resolution.
	I1212 00:29:38.358896  525066 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1212 00:29:38.358903  525066 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1212 00:29:38.358908  525066 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1212 00:29:38.358913  525066 command_runner.go:130] > # short_name_mode = "enforcing"
	I1212 00:29:38.358919  525066 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1212 00:29:38.358925  525066 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1212 00:29:38.358929  525066 command_runner.go:130] > # oci_artifact_mount_support = true
	I1212 00:29:38.358935  525066 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 00:29:38.358938  525066 command_runner.go:130] > # CNI plugins.
	I1212 00:29:38.358941  525066 command_runner.go:130] > [crio.network]
	I1212 00:29:38.358947  525066 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 00:29:38.358952  525066 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1212 00:29:38.358956  525066 command_runner.go:130] > # cni_default_network = ""
	I1212 00:29:38.358966  525066 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 00:29:38.358970  525066 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 00:29:38.358975  525066 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 00:29:38.358979  525066 command_runner.go:130] > # plugin_dirs = [
	I1212 00:29:38.358982  525066 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 00:29:38.358985  525066 command_runner.go:130] > # ]
	I1212 00:29:38.358989  525066 command_runner.go:130] > # List of included pod metrics.
	I1212 00:29:38.358993  525066 command_runner.go:130] > # included_pod_metrics = [
	I1212 00:29:38.359000  525066 command_runner.go:130] > # ]
	I1212 00:29:38.359005  525066 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 00:29:38.359010  525066 command_runner.go:130] > [crio.metrics]
	I1212 00:29:38.359017  525066 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 00:29:38.359024  525066 command_runner.go:130] > # enable_metrics = false
	I1212 00:29:38.359029  525066 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 00:29:38.359034  525066 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 00:29:38.359040  525066 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 00:29:38.359048  525066 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 00:29:38.359054  525066 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 00:29:38.359068  525066 command_runner.go:130] > # metrics_collectors = [
	I1212 00:29:38.359072  525066 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 00:29:38.359076  525066 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1212 00:29:38.359079  525066 command_runner.go:130] > # 	"containers_oom_total",
	I1212 00:29:38.359083  525066 command_runner.go:130] > # 	"processes_defunct",
	I1212 00:29:38.359087  525066 command_runner.go:130] > # 	"operations_total",
	I1212 00:29:38.359091  525066 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 00:29:38.359095  525066 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 00:29:38.359099  525066 command_runner.go:130] > # 	"operations_errors_total",
	I1212 00:29:38.359103  525066 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 00:29:38.359107  525066 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 00:29:38.359111  525066 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 00:29:38.359115  525066 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 00:29:38.359119  525066 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 00:29:38.359123  525066 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 00:29:38.359128  525066 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1212 00:29:38.359132  525066 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1212 00:29:38.359137  525066 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1212 00:29:38.359139  525066 command_runner.go:130] > # ]
	I1212 00:29:38.359145  525066 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1212 00:29:38.359149  525066 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1212 00:29:38.359155  525066 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 00:29:38.359158  525066 command_runner.go:130] > # metrics_port = 9090
	I1212 00:29:38.359167  525066 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 00:29:38.359171  525066 command_runner.go:130] > # metrics_socket = ""
	I1212 00:29:38.359176  525066 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 00:29:38.359182  525066 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 00:29:38.359188  525066 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 00:29:38.359192  525066 command_runner.go:130] > # certificate on any modification event.
	I1212 00:29:38.359196  525066 command_runner.go:130] > # metrics_cert = ""
	I1212 00:29:38.359201  525066 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 00:29:38.359206  525066 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 00:29:38.359209  525066 command_runner.go:130] > # metrics_key = ""
	I1212 00:29:38.359214  525066 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 00:29:38.359218  525066 command_runner.go:130] > [crio.tracing]
	I1212 00:29:38.359224  525066 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 00:29:38.359227  525066 command_runner.go:130] > # enable_tracing = false
	I1212 00:29:38.359233  525066 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 00:29:38.359237  525066 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1212 00:29:38.359243  525066 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1212 00:29:38.359249  525066 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 00:29:38.359253  525066 command_runner.go:130] > # CRI-O NRI configuration.
	I1212 00:29:38.359256  525066 command_runner.go:130] > [crio.nri]
	I1212 00:29:38.359260  525066 command_runner.go:130] > # Globally enable or disable NRI.
	I1212 00:29:38.359458  525066 command_runner.go:130] > # enable_nri = true
	I1212 00:29:38.359492  525066 command_runner.go:130] > # NRI socket to listen on.
	I1212 00:29:38.359531  525066 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1212 00:29:38.359552  525066 command_runner.go:130] > # NRI plugin directory to use.
	I1212 00:29:38.359571  525066 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1212 00:29:38.359603  525066 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1212 00:29:38.359625  525066 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1212 00:29:38.359646  525066 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1212 00:29:38.359766  525066 command_runner.go:130] > # nri_disable_connections = false
	I1212 00:29:38.359799  525066 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1212 00:29:38.359833  525066 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1212 00:29:38.359860  525066 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1212 00:29:38.359876  525066 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1212 00:29:38.359893  525066 command_runner.go:130] > # NRI default validator configuration.
	I1212 00:29:38.359933  525066 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1212 00:29:38.359959  525066 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1212 00:29:38.359990  525066 command_runner.go:130] > # can be restricted/rejected:
	I1212 00:29:38.360015  525066 command_runner.go:130] > # - OCI hook injection
	I1212 00:29:38.360033  525066 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1212 00:29:38.360064  525066 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1212 00:29:38.360089  525066 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1212 00:29:38.360107  525066 command_runner.go:130] > # - adjustment of linux namespaces
	I1212 00:29:38.360127  525066 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1212 00:29:38.360166  525066 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1212 00:29:38.360186  525066 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1212 00:29:38.360201  525066 command_runner.go:130] > #
	I1212 00:29:38.360237  525066 command_runner.go:130] > # [crio.nri.default_validator]
	I1212 00:29:38.360255  525066 command_runner.go:130] > # nri_enable_default_validator = false
	I1212 00:29:38.360272  525066 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1212 00:29:38.360303  525066 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1212 00:29:38.360330  525066 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1212 00:29:38.360348  525066 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1212 00:29:38.360476  525066 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1212 00:29:38.360648  525066 command_runner.go:130] > # nri_validator_required_plugins = [
	I1212 00:29:38.360681  525066 command_runner.go:130] > # ]
	I1212 00:29:38.360704  525066 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1212 00:29:38.360740  525066 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 00:29:38.360764  525066 command_runner.go:130] > [crio.stats]
	I1212 00:29:38.360783  525066 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 00:29:38.360814  525066 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 00:29:38.360847  525066 command_runner.go:130] > # stats_collection_period = 0
	I1212 00:29:38.360867  525066 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1212 00:29:38.360905  525066 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1212 00:29:38.360921  525066 command_runner.go:130] > # collection_period = 0
	I1212 00:29:38.360984  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313366715Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1212 00:29:38.361015  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313641917Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1212 00:29:38.361052  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.313871475Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1212 00:29:38.361075  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.314022397Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1212 00:29:38.361124  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.314372427Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:29:38.361154  525066 command_runner.go:130] ! time="2025-12-12T00:29:38.31485409Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1212 00:29:38.361178  525066 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
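
The dump above is CRI-O's effective configuration, assembled from /etc/crio/crio.conf plus the drop-ins under /etc/crio/crio.conf.d, as the "Updating config" messages show. A minimal Go sketch for inspecting the [crio.runtime.runtimes.*] tables it prints, assuming the github.com/BurntSushi/toml package; the path and the field subset are illustrative, and the drop-in files would need the same treatment:

// Decode the [crio.runtime.runtimes.*] tables from crio.conf to inspect
// fields such as runtime_path and monitor_cgroup programmatically.
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// runtimeHandler mirrors only the fields we care about from the dump.
type runtimeHandler struct {
	RuntimePath   string `toml:"runtime_path"`
	RuntimeRoot   string `toml:"runtime_root"`
	MonitorPath   string `toml:"monitor_path"`
	MonitorCgroup string `toml:"monitor_cgroup"`
}

type crioConfig struct {
	Crio struct {
		Runtime struct {
			Runtimes map[string]runtimeHandler `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		panic(err)
	}
	// Unknown keys in the file (allowed_annotations, etc.) are ignored.
	for name, h := range cfg.Crio.Runtime.Runtimes {
		fmt.Printf("%s: path=%s root=%s monitor_cgroup=%s\n",
			name, h.RuntimePath, h.RuntimeRoot, h.MonitorCgroup)
	}
}
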
	I1212 00:29:38.361311  525066 cni.go:84] Creating CNI manager for ""
	I1212 00:29:38.361353  525066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:29:38.361385  525066 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:29:38.361436  525066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-035643 NodeName:functional-035643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:29:38.361629  525066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-035643"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
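
The kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). A minimal sketch, assuming gopkg.in/yaml.v3, that splits the file into its YAML documents and reads cgroupDriver back out of the KubeletConfiguration, matching the "cgroupDriver: cgroupfs" above:

// Split a multi-document kubeadm.yaml and extract one KubeletConfiguration field.
package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var obj struct {
			Kind         string `yaml:"kind"`
			CgroupDriver string `yaml:"cgroupDriver"`
		}
		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
			continue // skip documents that do not parse
		}
		if obj.Kind == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", obj.CgroupDriver)
		}
	}
}
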
	I1212 00:29:38.361753  525066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:29:38.369085  525066 command_runner.go:130] > kubeadm
	I1212 00:29:38.369101  525066 command_runner.go:130] > kubectl
	I1212 00:29:38.369105  525066 command_runner.go:130] > kubelet
	I1212 00:29:38.369321  525066 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:29:38.369385  525066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:29:38.376829  525066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:29:38.389638  525066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:29:38.402701  525066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1212 00:29:38.415693  525066 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:29:38.420581  525066 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 00:29:38.420662  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:38.566232  525066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:39.219049  525066 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643 for IP: 192.168.49.2
	I1212 00:29:39.219079  525066 certs.go:195] generating shared ca certs ...
	I1212 00:29:39.219096  525066 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:39.219238  525066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 00:29:39.219285  525066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 00:29:39.219292  525066 certs.go:257] generating profile certs ...
	I1212 00:29:39.219491  525066 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key
	I1212 00:29:39.219603  525066 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493
	I1212 00:29:39.219699  525066 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key
	I1212 00:29:39.219742  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:29:39.219761  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:29:39.219773  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:29:39.219783  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:29:39.219798  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:29:39.219843  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:29:39.219860  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:29:39.219871  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:29:39.219967  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 00:29:39.220038  525066 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 00:29:39.220049  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:29:39.220117  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:29:39.220147  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:29:39.220202  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 00:29:39.220256  525066 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:29:39.220332  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem -> /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.220378  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.220396  525066 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.221003  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:29:39.242927  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:29:39.262484  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:29:39.285732  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:29:39.303346  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:29:39.320786  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:29:39.338821  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:29:39.356806  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:29:39.374381  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 00:29:39.392333  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 00:29:39.410089  525066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:29:39.427383  525066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:29:39.439725  525066 ssh_runner.go:195] Run: openssl version
	I1212 00:29:39.445636  525066 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 00:29:39.445982  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.453236  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 00:29:39.460672  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464184  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464289  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.464344  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 00:29:39.505960  525066 command_runner.go:130] > 51391683
	I1212 00:29:39.506560  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:29:39.514611  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.522360  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 00:29:39.531109  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.534913  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.535312  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.535374  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 00:29:39.578207  525066 command_runner.go:130] > 3ec20f2e
	I1212 00:29:39.578374  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:29:39.586281  525066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.593845  525066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:29:39.601415  525066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605435  525066 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605483  525066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.605537  525066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:29:39.646250  525066 command_runner.go:130] > b5213941
	I1212 00:29:39.646757  525066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
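
The three openssl x509 -hash / ln -fs rounds above install each CA into /etc/ssl/certs under its subject-hash name (e.g. b5213941.0), which is how OpenSSL-based clients locate trust anchors. A minimal Go sketch of one round, shelling out to the same openssl invocation the log shows (installCA is an illustrative helper; it needs the same root privileges the log's sudo provides):

// Compute a certificate's OpenSSL subject hash and symlink it into /etc/ssl/certs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. 51391683
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
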
	I1212 00:29:39.654391  525066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:39.658287  525066 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:29:39.658314  525066 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 00:29:39.658322  525066 command_runner.go:130] > Device: 259,1	Inode: 2360480     Links: 1
	I1212 00:29:39.658330  525066 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:29:39.658336  525066 command_runner.go:130] > Access: 2025-12-12 00:25:30.972268820 +0000
	I1212 00:29:39.658341  525066 command_runner.go:130] > Modify: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658346  525066 command_runner.go:130] > Change: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658351  525066 command_runner.go:130] >  Birth: 2025-12-12 00:21:25.329898534 +0000
	I1212 00:29:39.658416  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:29:39.699997  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.700109  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:29:39.748952  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.749499  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:29:39.797710  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.798154  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:29:39.843103  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.843601  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:29:39.887374  525066 command_runner.go:130] > Certificate will not expire
	I1212 00:29:39.887871  525066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:29:39.942362  525066 command_runner.go:130] > Certificate will not expire
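
Each -checkend 86400 call above asks whether a certificate expires within 24 hours (86400 seconds). The equivalent check in pure Go with crypto/x509, no openssl needed (expiresWithin is an illustrative helper):

// Report whether a PEM certificate's NotAfter falls within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
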
	I1212 00:29:39.942946  525066 kubeadm.go:401] StartCluster: {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:29:39.943046  525066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:29:39.943208  525066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:29:39.985575  525066 cri.go:89] found id: ""
	I1212 00:29:39.985700  525066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:29:39.993609  525066 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 00:29:39.993681  525066 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 00:29:39.993702  525066 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 00:29:39.994895  525066 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:29:39.994945  525066 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:29:39.995038  525066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:29:40.006978  525066 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:29:40.007554  525066 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-035643" does not appear in /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.007785  525066 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-487723/kubeconfig needs updating (will repair): [kubeconfig missing "functional-035643" cluster setting kubeconfig missing "functional-035643" context setting]
	I1212 00:29:40.008175  525066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.008787  525066 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.009179  525066 kapi.go:59] client config for functional-035643: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
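
The kubeconfig repair above fires because neither a "functional-035643" cluster nor context was found in the file. A minimal sketch of that presence check with k8s.io/client-go/tools/clientcmd (the kubeconfig path is taken from the log):

// Check whether a named cluster and context exist in a kubeconfig file.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22101-487723/kubeconfig")
	if err != nil {
		panic(err)
	}
	name := "functional-035643"
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	fmt.Printf("cluster present: %v, context present: %v\n", hasCluster, hasContext)
}
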
	I1212 00:29:40.009975  525066 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 00:29:40.010118  525066 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 00:29:40.010148  525066 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:29:40.010168  525066 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 00:29:40.010204  525066 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 00:29:40.010223  525066 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 00:29:40.010646  525066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:29:40.025803  525066 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 00:29:40.025893  525066 kubeadm.go:602] duration metric: took 30.929693ms to restartPrimaryControlPlane
	I1212 00:29:40.025918  525066 kubeadm.go:403] duration metric: took 82.978705ms to StartCluster
	I1212 00:29:40.025961  525066 settings.go:142] acquiring lock: {Name:mk274c10b2238dc32d72b68ac2e1ec517b8a72b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.026057  525066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.026847  525066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:29:40.027182  525066 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:29:40.027614  525066 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:29:40.027718  525066 addons.go:70] Setting storage-provisioner=true in profile "functional-035643"
	I1212 00:29:40.027733  525066 addons.go:239] Setting addon storage-provisioner=true in "functional-035643"
	I1212 00:29:40.027759  525066 host.go:66] Checking if "functional-035643" exists ...
	I1212 00:29:40.027683  525066 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:29:40.027963  525066 addons.go:70] Setting default-storageclass=true in profile "functional-035643"
	I1212 00:29:40.028014  525066 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-035643"
	I1212 00:29:40.028265  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.028431  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.031408  525066 out.go:179] * Verifying Kubernetes components...
	I1212 00:29:40.035144  525066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:29:40.072983  525066 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:29:40.073191  525066 kapi.go:59] client config for functional-035643: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:29:40.073564  525066 addons.go:239] Setting addon default-storageclass=true in "functional-035643"
	I1212 00:29:40.073635  525066 host.go:66] Checking if "functional-035643" exists ...
	I1212 00:29:40.074143  525066 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:29:40.079735  525066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:29:40.083203  525066 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:40.083224  525066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:29:40.083308  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:40.126926  525066 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:40.126953  525066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:29:40.127024  525066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:29:40.157562  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:29:40.176759  525066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
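
The sshutil lines show the test pushing addon manifests to the node over SSH ("scp memory --> ...") using the machine's id_rsa key against 127.0.0.1:33183. A sketch of the same transfer with golang.org/x/crypto/ssh; piping into sudo tee is an assumed stand-in for minikube's own copy helper, not its real implementation:

    package main

    import (
    	"bytes"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path, user, and port are the ones from this log.
    	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyPEM)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33183", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	// Stream the in-memory manifest bytes to the remote path.
    	sess.Stdin = bytes.NewReader([]byte("apiVersion: v1\nkind: ServiceAccount\n")) // manifest bytes
    	if err := sess.Run("sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"); err != nil {
    		log.Fatal(err)
    	}
    }
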
	I1212 00:29:40.228329  525066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:29:40.297459  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:40.324896  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:40.970121  525066 node_ready.go:35] waiting up to 6m0s for node "functional-035643" to be "Ready" ...
	I1212 00:29:40.970322  525066 type.go:168] "Request Body" body=""
	I1212 00:29:40.970407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:40.970561  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:40.970616  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.970718  525066 retry.go:31] will retry after 204.18222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
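
Each failing apply above is minikube shelling out to the node's bundled kubectl with KUBECONFIG pointed at /var/lib/minikube/kubeconfig; the nonzero exit status is what triggers the retry (and, after the first failure, the switch to apply --force). A sketch of that invocation with os/exec, paths mirroring the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // kubectlApply runs `kubectl apply` the way the ssh_runner lines above do,
    // capturing combined output so a "connection refused" from the apiserver
    // can be surfaced to the retry loop.
    func kubectlApply(manifest string) error {
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl", "apply", "-f", manifest)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply -f %s: %w\n%s", manifest, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := kubectlApply("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
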
	I1212 00:29:40.970890  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:40.970976  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.971113  525066 retry.go:31] will retry after 159.994769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:40.971100  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:41.131658  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:41.175423  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:41.193550  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.193607  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.193625  525066 retry.go:31] will retry after 255.861028ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.245543  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.245583  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.245622  525066 retry.go:31] will retry after 363.545377ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
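
The retry.go lines schedule each reattempt after a growing, slightly randomized delay (204ms, 255ms, 363ms, 558ms, ... eventually several seconds). A sketch of that retry loop; the 0.5x-1.5x jitter and doubling base are assumptions read off the logged delays, not minikube's documented policy:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn, and on failure sleeps a jittered, roughly doubling delay
    // before trying again, up to maxAttempts, mirroring the
    // "will retry after ..." lines above.
    func retry(maxAttempts int, base time.Duration, fn func() error) error {
    	var err error
    	delay := base
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay))) // 0.5x..1.5x of delay
    		fmt.Printf("will retry after %v: %v\n", jittered, err)
    		time.Sleep(jittered)
    		delay *= 2
    	}
    	return fmt.Errorf("after %d attempts: %w", maxAttempts, err)
    }

    func main() {
    	n := 0
    	_ = retry(5, 200*time.Millisecond, func() error {
    		n++
    		if n < 3 {
    			return fmt.Errorf("connection refused")
    		}
    		return nil
    	})
    }

Note that the loop sleeps between attempts but never aborts early; in the log this is why the applies keep firing even while every dial to the apiserver is refused.
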
	I1212 00:29:41.449762  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:41.471214  525066 type.go:168] "Request Body" body=""
	I1212 00:29:41.471319  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:41.471599  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:41.515695  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.515762  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.515785  525066 retry.go:31] will retry after 558.343872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.610204  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:41.681946  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:41.682005  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.682029  525066 retry.go:31] will retry after 553.13192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:41.971401  525066 type.go:168] "Request Body" body=""
	I1212 00:29:41.971545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:41.971960  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:42.075338  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:42.153789  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.153831  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.153875  525066 retry.go:31] will retry after 562.779161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.238244  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:42.309134  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.309235  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.309278  525066 retry.go:31] will retry after 839.848798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.470350  525066 type.go:168] "Request Body" body=""
	I1212 00:29:42.470438  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:42.470717  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:42.717299  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:42.779260  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:42.779300  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.779319  525066 retry.go:31] will retry after 1.384955704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:42.970802  525066 type.go:168] "Request Body" body=""
	I1212 00:29:42.970878  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:42.971167  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:42.971212  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
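
The node_ready.go loop above re-GETs the node every 500ms for up to 6m, treating connection-refused as a transient condition to retry rather than a hard failure. A sketch of the same wait using client-go and apimachinery's wait helpers; a behavioral sketch, not node_ready.go's actual source:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22101-487723/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Poll every 500ms, up to 6m, for the node's Ready condition.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-035643", metav1.GetOptions{})
    			if err != nil {
    				fmt.Println("will retry:", err) // apiserver may still be coming up
    				return false, nil               // returning a nil error keeps the poll going
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("node is Ready")
    }
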
	I1212 00:29:43.149494  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:43.213920  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:43.218125  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:43.218200  525066 retry.go:31] will retry after 1.154245365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:43.470517  525066 type.go:168] "Request Body" body=""
	I1212 00:29:43.470604  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:43.470922  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:43.970580  525066 type.go:168] "Request Body" body=""
	I1212 00:29:43.970743  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:43.971073  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:44.165470  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:44.225816  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:44.225880  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.225901  525066 retry.go:31] will retry after 2.063043455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.373318  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:44.437999  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:44.441831  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.441865  525066 retry.go:31] will retry after 1.856604218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:44.471071  525066 type.go:168] "Request Body" body=""
	I1212 00:29:44.471144  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:44.471437  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:44.971289  525066 type.go:168] "Request Body" body=""
	I1212 00:29:44.971379  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:44.971730  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:44.971780  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:45.470482  525066 type.go:168] "Request Body" body=""
	I1212 00:29:45.470622  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:45.470959  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:45.970491  525066 type.go:168] "Request Body" body=""
	I1212 00:29:45.970565  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:45.970940  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:46.289221  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:46.298644  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:46.387298  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:46.387341  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.387359  525066 retry.go:31] will retry after 2.162137781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.389923  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:46.389964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.389984  525066 retry.go:31] will retry after 2.885458194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:46.471167  525066 type.go:168] "Request Body" body=""
	I1212 00:29:46.471247  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:46.471565  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:46.971278  525066 type.go:168] "Request Body" body=""
	I1212 00:29:46.971393  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:46.971713  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:46.971800  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:47.471406  525066 type.go:168] "Request Body" body=""
	I1212 00:29:47.471481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:47.471794  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:47.970503  525066 type.go:168] "Request Body" body=""
	I1212 00:29:47.970590  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:47.970978  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:48.470459  525066 type.go:168] "Request Body" body=""
	I1212 00:29:48.470563  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:48.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:48.550228  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:48.609468  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:48.609564  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:48.609586  525066 retry.go:31] will retry after 5.142469671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:48.970999  525066 type.go:168] "Request Body" body=""
	I1212 00:29:48.971081  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:48.971378  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:49.275822  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:49.338921  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:49.338964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:49.338982  525066 retry.go:31] will retry after 3.130992497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:49.471334  525066 type.go:168] "Request Body" body=""
	I1212 00:29:49.471407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:49.471715  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:49.471774  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:49.970357  525066 type.go:168] "Request Body" body=""
	I1212 00:29:49.970428  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:49.970800  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:50.470449  525066 type.go:168] "Request Body" body=""
	I1212 00:29:50.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:50.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:50.970632  525066 type.go:168] "Request Body" body=""
	I1212 00:29:50.970736  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:50.971135  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:51.470850  525066 type.go:168] "Request Body" body=""
	I1212 00:29:51.470934  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:51.471301  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:51.971160  525066 type.go:168] "Request Body" body=""
	I1212 00:29:51.971232  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:51.971562  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:51.971629  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:52.470175  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:52.470342  525066 type.go:168] "Request Body" body=""
	I1212 00:29:52.470395  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:52.470704  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:52.525865  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:52.529169  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:52.529199  525066 retry.go:31] will retry after 5.202817608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:52.970512  525066 type.go:168] "Request Body" body=""
	I1212 00:29:52.970577  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:52.970929  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:53.470488  525066 type.go:168] "Request Body" body=""
	I1212 00:29:53.470560  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:53.470915  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:53.752286  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:29:53.818071  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:53.818120  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:53.818138  525066 retry.go:31] will retry after 7.493688168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:53.970432  525066 type.go:168] "Request Body" body=""
	I1212 00:29:53.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:53.970820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:54.470420  525066 type.go:168] "Request Body" body=""
	I1212 00:29:54.470487  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:54.470795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:54.470851  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:54.970811  525066 type.go:168] "Request Body" body=""
	I1212 00:29:54.970890  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:54.971241  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:55.471081  525066 type.go:168] "Request Body" body=""
	I1212 00:29:55.471155  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:55.471463  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:55.971189  525066 type.go:168] "Request Body" body=""
	I1212 00:29:55.971259  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:55.971627  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:56.470367  525066 type.go:168] "Request Body" body=""
	I1212 00:29:56.470446  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:56.470766  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:56.970401  525066 type.go:168] "Request Body" body=""
	I1212 00:29:56.970473  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:56.970813  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:56.970885  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
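
Every failure in this stretch dies on the same TCP-level refusal to port 8441, both via 192.168.49.2 and localhost, so the manifests were never the problem: the apiserver simply isn't listening. When triaging a run like this, a bare reachability probe (no Kubernetes machinery) confirms that quickly; the addresses below are the ones from the log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // Probe the apiserver port the same way the failing dials above do.
    func main() {
    	for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err != nil {
    			fmt.Printf("%s: unreachable (%v)\n", addr, err)
    			continue
    		}
    		conn.Close()
    		fmt.Printf("%s: listening\n", addr)
    	}
    }
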
	I1212 00:29:57.470373  525066 type.go:168] "Request Body" body=""
	I1212 00:29:57.470446  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:57.470716  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:57.732201  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:29:57.788085  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:29:57.792139  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:57.792170  525066 retry.go:31] will retry after 6.658571386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:29:57.970423  525066 type.go:168] "Request Body" body=""
	I1212 00:29:57.970495  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:57.970833  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:58.470545  525066 type.go:168] "Request Body" body=""
	I1212 00:29:58.470620  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:58.470971  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:58.970653  525066 type.go:168] "Request Body" body=""
	I1212 00:29:58.970748  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:58.971004  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:29:58.971063  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:29:59.470479  525066 type.go:168] "Request Body" body=""
	I1212 00:29:59.470553  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:59.470886  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:29:59.970903  525066 type.go:168] "Request Body" body=""
	I1212 00:29:59.970985  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:29:59.971299  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:00.470879  525066 type.go:168] "Request Body" body=""
	I1212 00:30:00.470978  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:00.471351  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:00.971213  525066 type.go:168] "Request Body" body=""
	I1212 00:30:00.971307  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:00.971736  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:00.971826  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:01.312112  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:01.378306  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:01.384542  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:01.384581  525066 retry.go:31] will retry after 9.383564416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
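The retry.go:31 line above shows how a failed addon apply is rescheduled: the full command, its exit status, and a randomized wait (9.38 s here; later attempts in this log use 8.1 s, 16.6 s, 20.3 s, 29.9 s, 30.5 s, and 36.7 s) before kubectl apply runs again. A minimal sketch of that retry-with-jitter pattern, assuming nothing about minikube's real retry.go (the backoff formula below is invented for illustration):

	// Hypothetical retry-with-jitter around an addon apply. The manifest
	// path is taken from the log; the delay formula is made up.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if e == nil {
				return nil
			}
			err = fmt.Errorf("apply %s: %v\n%s", manifest, e, out)
			// Jittered backoff: the base delay grows with the attempt
			// number, roughly matching the spread of waits in the log.
			delay := time.Duration(float64(time.Second) * (1 + rand.Float64()) * float64(i+1))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
			fmt.Println("giving up:", err)
		}
	}

The stderr also suggests --validate=false, but that only skips client-side schema validation against the downloaded OpenAPI document; with the apiserver refusing connections, the apply itself would still fail, so retrying until port 8441 answers again is the only real option here.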
	I1212 00:30:01.470976  525066 type.go:168] "Request Body" body=""
	I1212 00:30:01.471119  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:01.471452  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:01.971252  525066 type.go:168] "Request Body" body=""
	I1212 00:30:01.971351  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:01.971665  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:02.470427  525066 type.go:168] "Request Body" body=""
	I1212 00:30:02.470507  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:02.470916  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:02.970616  525066 type.go:168] "Request Body" body=""
	I1212 00:30:02.970721  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:02.971066  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:03.470621  525066 type.go:168] "Request Body" body=""
	I1212 00:30:03.470716  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:03.470992  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:03.471037  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:03.970767  525066 type.go:168] "Request Body" body=""
	I1212 00:30:03.970845  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:03.971214  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:04.450915  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:04.471249  525066 type.go:168] "Request Body" body=""
	I1212 00:30:04.471318  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:04.471581  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:04.504992  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:04.508551  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:04.508584  525066 retry.go:31] will retry after 16.635241248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
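Two different addresses fail the same way in this stretch: the Ready polls dial 192.168.49.2:8441 from the test host, while kubectl, invoked over SSH inside the node, dials localhost:8441. Both returning connection refused points at kube-apiserver itself not listening rather than a routing problem. A hedged diagnostic sketch under that assumption (the probe helper is made up; the addresses are taken from the log):

	// Hypothetical probe distinguishing "connection refused" (no listener
	// on the port, i.e. the apiserver process is down) from a timeout
	// (network path problem). Linux-specific via syscall.ECONNREFUSED.
	package main

	import (
		"errors"
		"fmt"
		"net"
		"syscall"
		"time"
	)

	func probe(addr string) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println(addr, "accepting connections")
			return
		}
		if errors.Is(err, syscall.ECONNREFUSED) {
			fmt.Println(addr, "refused: nothing is listening (apiserver down?)")
			return
		}
		fmt.Println(addr, "unreachable:", err)
	}

	func main() {
		probe("192.168.49.2:8441") // address the node-Ready polls use
		probe("127.0.0.1:8441")    // address kubectl on the node uses
	}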
	I1212 00:30:04.971271  525066 type.go:168] "Request Body" body=""
	I1212 00:30:04.971364  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:04.971628  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:05.470444  525066 type.go:168] "Request Body" body=""
	I1212 00:30:05.470516  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:05.470888  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:05.970469  525066 type.go:168] "Request Body" body=""
	I1212 00:30:05.970547  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:05.970907  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:05.970959  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:06.470384  525066 type.go:168] "Request Body" body=""
	I1212 00:30:06.470477  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:06.470800  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:06.970490  525066 type.go:168] "Request Body" body=""
	I1212 00:30:06.970569  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:06.970938  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:07.470503  525066 type.go:168] "Request Body" body=""
	I1212 00:30:07.470599  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:07.470930  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:07.970442  525066 type.go:168] "Request Body" body=""
	I1212 00:30:07.970519  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:07.970789  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:08.470442  525066 type.go:168] "Request Body" body=""
	I1212 00:30:08.470518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:08.470850  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:08.470905  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:08.970456  525066 type.go:168] "Request Body" body=""
	I1212 00:30:08.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:08.970891  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:09.470554  525066 type.go:168] "Request Body" body=""
	I1212 00:30:09.470632  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:09.470900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:09.970929  525066 type.go:168] "Request Body" body=""
	I1212 00:30:09.971012  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:09.971327  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:10.470376  525066 type.go:168] "Request Body" body=""
	I1212 00:30:10.470457  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:10.470750  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:10.768281  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:10.825103  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:10.828984  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:10.829014  525066 retry.go:31] will retry after 8.149625317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:10.971311  525066 type.go:168] "Request Body" body=""
	I1212 00:30:10.971379  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:10.971644  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:10.971683  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:11.470436  525066 type.go:168] "Request Body" body=""
	I1212 00:30:11.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:11.470843  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:11.970432  525066 type.go:168] "Request Body" body=""
	I1212 00:30:11.970527  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:11.970846  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:12.470525  525066 type.go:168] "Request Body" body=""
	I1212 00:30:12.470603  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:12.470941  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:12.970791  525066 type.go:168] "Request Body" body=""
	I1212 00:30:12.970866  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:12.971205  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:13.470475  525066 type.go:168] "Request Body" body=""
	I1212 00:30:13.470560  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:13.470875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:13.470931  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:13.970549  525066 type.go:168] "Request Body" body=""
	I1212 00:30:13.970621  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:13.970905  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:14.470469  525066 type.go:168] "Request Body" body=""
	I1212 00:30:14.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:14.470911  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:14.970901  525066 type.go:168] "Request Body" body=""
	I1212 00:30:14.970980  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:14.971358  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:15.471006  525066 type.go:168] "Request Body" body=""
	I1212 00:30:15.471085  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:15.471350  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:15.471390  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:15.971171  525066 type.go:168] "Request Body" body=""
	I1212 00:30:15.971259  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:15.971595  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:16.471255  525066 type.go:168] "Request Body" body=""
	I1212 00:30:16.471330  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:16.471636  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:16.970382  525066 type.go:168] "Request Body" body=""
	I1212 00:30:16.970466  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:16.970768  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:17.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:30:17.470517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:17.470833  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:17.970436  525066 type.go:168] "Request Body" body=""
	I1212 00:30:17.970514  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:17.970839  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:17.970896  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:18.470516  525066 type.go:168] "Request Body" body=""
	I1212 00:30:18.470594  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:18.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:18.970641  525066 type.go:168] "Request Body" body=""
	I1212 00:30:18.970763  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:18.971104  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:18.979423  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:19.044083  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:19.044119  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:19.044140  525066 retry.go:31] will retry after 30.537522265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:19.470570  525066 type.go:168] "Request Body" body=""
	I1212 00:30:19.470653  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:19.471007  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:19.971048  525066 type.go:168] "Request Body" body=""
	I1212 00:30:19.971122  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:19.971389  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:19.971439  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:20.470412  525066 type.go:168] "Request Body" body=""
	I1212 00:30:20.470491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:20.470835  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:20.970464  525066 type.go:168] "Request Body" body=""
	I1212 00:30:20.970544  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:20.970890  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:21.144446  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:21.207915  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:21.207964  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:21.207983  525066 retry.go:31] will retry after 20.295589284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:21.471340  525066 type.go:168] "Request Body" body=""
	I1212 00:30:21.471410  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:21.471696  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:21.970430  525066 type.go:168] "Request Body" body=""
	I1212 00:30:21.970500  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:21.970808  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:22.470556  525066 type.go:168] "Request Body" body=""
	I1212 00:30:22.470633  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:22.470953  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:22.471006  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:22.970442  525066 type.go:168] "Request Body" body=""
	I1212 00:30:22.970508  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:22.970782  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:23.470501  525066 type.go:168] "Request Body" body=""
	I1212 00:30:23.470601  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:23.470922  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:23.970478  525066 type.go:168] "Request Body" body=""
	I1212 00:30:23.970553  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:23.970885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:24.470551  525066 type.go:168] "Request Body" body=""
	I1212 00:30:24.470618  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:24.470920  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:24.971014  525066 type.go:168] "Request Body" body=""
	I1212 00:30:24.971090  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:24.971391  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:24.971444  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:25.471210  525066 type.go:168] "Request Body" body=""
	I1212 00:30:25.471284  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:25.471604  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:25.971349  525066 type.go:168] "Request Body" body=""
	I1212 00:30:25.971417  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:25.971673  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:26.470375  525066 type.go:168] "Request Body" body=""
	I1212 00:30:26.470450  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:26.470752  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:26.970468  525066 type.go:168] "Request Body" body=""
	I1212 00:30:26.970568  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:26.970900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:27.470551  525066 type.go:168] "Request Body" body=""
	I1212 00:30:27.470632  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:27.470951  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:27.471009  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:27.970461  525066 type.go:168] "Request Body" body=""
	I1212 00:30:27.970535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:27.970896  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:28.470447  525066 type.go:168] "Request Body" body=""
	I1212 00:30:28.470517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:28.470841  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:28.970537  525066 type.go:168] "Request Body" body=""
	I1212 00:30:28.970615  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:28.970891  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:29.470450  525066 type.go:168] "Request Body" body=""
	I1212 00:30:29.470530  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:29.470908  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:29.970898  525066 type.go:168] "Request Body" body=""
	I1212 00:30:29.970970  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:29.971305  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:29.971361  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:30.470857  525066 type.go:168] "Request Body" body=""
	I1212 00:30:30.470924  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:30.471192  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:30.971054  525066 type.go:168] "Request Body" body=""
	I1212 00:30:30.971147  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:30.971476  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:31.471280  525066 type.go:168] "Request Body" body=""
	I1212 00:30:31.471354  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:31.471652  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:31.970396  525066 type.go:168] "Request Body" body=""
	I1212 00:30:31.970469  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:31.970748  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:32.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:30:32.470524  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:32.470875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:32.470929  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:32.970467  525066 type.go:168] "Request Body" body=""
	I1212 00:30:32.970548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:32.970910  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:33.470548  525066 type.go:168] "Request Body" body=""
	I1212 00:30:33.470621  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:33.470958  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:33.970663  525066 type.go:168] "Request Body" body=""
	I1212 00:30:33.970768  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:33.971120  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:34.470641  525066 type.go:168] "Request Body" body=""
	I1212 00:30:34.470734  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:34.471055  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:34.471106  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:34.971029  525066 type.go:168] "Request Body" body=""
	I1212 00:30:34.971106  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:34.971362  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:35.471168  525066 type.go:168] "Request Body" body=""
	I1212 00:30:35.471237  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:35.471543  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:35.971213  525066 type.go:168] "Request Body" body=""
	I1212 00:30:35.971284  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:35.971613  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:36.471350  525066 type.go:168] "Request Body" body=""
	I1212 00:30:36.471428  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:36.471693  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:36.471739  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:36.970449  525066 type.go:168] "Request Body" body=""
	I1212 00:30:36.970521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:36.970836  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:37.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:30:37.470518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:37.470867  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:37.970336  525066 type.go:168] "Request Body" body=""
	I1212 00:30:37.970408  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:37.970717  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:38.470440  525066 type.go:168] "Request Body" body=""
	I1212 00:30:38.470510  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:38.470841  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:38.970461  525066 type.go:168] "Request Body" body=""
	I1212 00:30:38.970541  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:38.970920  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:38.970990  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:39.470649  525066 type.go:168] "Request Body" body=""
	I1212 00:30:39.470739  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:39.471073  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:39.970912  525066 type.go:168] "Request Body" body=""
	I1212 00:30:39.970992  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:39.971332  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:40.471276  525066 type.go:168] "Request Body" body=""
	I1212 00:30:40.471354  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:40.471676  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:40.970403  525066 type.go:168] "Request Body" body=""
	I1212 00:30:40.970481  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:40.970820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:41.470514  525066 type.go:168] "Request Body" body=""
	I1212 00:30:41.470595  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:41.470937  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:41.471004  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:41.504392  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:30:41.561180  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:41.564784  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:41.564819  525066 retry.go:31] will retry after 29.925155821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:41.971369  525066 type.go:168] "Request Body" body=""
	I1212 00:30:41.971443  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:41.971817  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:42.470482  525066 type.go:168] "Request Body" body=""
	I1212 00:30:42.470569  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:42.470884  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:42.970674  525066 type.go:168] "Request Body" body=""
	I1212 00:30:42.970766  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:42.971092  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:43.470816  525066 type.go:168] "Request Body" body=""
	I1212 00:30:43.470887  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:43.471196  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:43.471261  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:30:43.970994  525066 type.go:168] "Request Body" body=""
	I1212 00:30:43.971095  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:43.971420  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:30:44.471076  525066 type.go:168] "Request Body" body=""
	I1212 00:30:44.471150  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:44.471470  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:45.970886  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET poll repeats every ~500ms (00:30:44.971 through 00:30:48.970), every response empty; the same node_ready.go:55 connection-refused warning recurs at 00:30:47.971047 ...]
	I1212 00:30:49.470483  525066 type.go:168] "Request Body" body=""
	I1212 00:30:49.470560  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:49.470858  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
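The exchange above is minikube's node-readiness wait loop (node_ready.go) polling GET /api/v1/nodes/functional-035643 every ~500ms until the node reports Ready, warning every couple of seconds while the apiserver refuses connections. A minimal sketch of that poll-until-reachable shape, using only the Go standard library; the direct HTTP GET and the InsecureSkipVerify transport are illustrative simplifications, not minikube's actual client-go round-tripper:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// The apiserver endpoint polled in the log above.
    	url := "https://192.168.49.2:8441/api/v1/nodes/functional-035643"

    	// Certificate verification is skipped purely for the sketch; minikube's
    	// real client authenticates with the CA and client certs from kubeconfig.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}

    	ticker := time.NewTicker(500 * time.Millisecond) // the ~500ms cadence seen above
    	defer ticker.Stop()
    	for range ticker.C {
    		resp, err := client.Get(url)
    		if err != nil {
    			// Mirrors the node_ready.go warnings: log and keep retrying.
    			fmt.Printf("error getting node (will retry): %v\n", err)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Printf("node object fetched (HTTP %d); inspect its Ready condition next\n", resp.StatusCode)
    		return
    	}
    }

In the real loop the fetched node object's status.conditions would be decoded and the wait would end only once the "Ready" condition is True.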
	I1212 00:30:49.582168  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:30:49.635241  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:30:49.638539  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:30:49.638564  525066 retry.go:31] will retry after 36.706436998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
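The apply above fails before anything reaches the cluster: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, the dial is refused, and minikube's addon manager classifies the failure as retryable and schedules another attempt (retry.go:31, 36.7s later). A minimal sketch of that apply-and-retry shape, assuming the kubectl path and manifest shown in the log; the fixed doubling backoff is an illustrative stand-in for minikube's randomized retry delay:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyManifest mirrors the invocation in the log:
    // sudo KUBECONFIG=... kubectl apply --force -f <manifest>
    func applyManifest(manifest string) error {
    	cmd := exec.Command(
    		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    		"apply", "--force", "-f", manifest,
    	)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply %s: %w\noutput:\n%s", manifest, err, out)
    	}
    	return nil
    }

    func main() {
    	// Illustrative fixed-doubling backoff; the real retry.go picks a
    	// randomized delay (36.706436998s in the log above).
    	delay := 5 * time.Second
    	for attempt := 1; attempt <= 5; attempt++ {
    		err := applyManifest("/etc/kubernetes/addons/storageclass.yaml")
    		if err == nil {
    			fmt.Println("applied successfully")
    			return
    		}
    		fmt.Printf("attempt %d failed, will retry after %s: %v\n", attempt, delay, err)
    		time.Sleep(delay)
    		delay *= 2
    	}
    	fmt.Println("giving up")
    }

Run on the node itself, each failed attempt would print the same validation error seen above until the apiserver comes back.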
	I1212 00:30:49.971245  525066 type.go:168] "Request Body" body=""
	I1212 00:30:49.971317  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:30:49.971579  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:30:49.971624  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical poll repeats every ~500ms through 00:31:11.470882, with the same connection-refused warning logged at 00:30:52.471051, 00:30:54.971210, 00:30:56.971777, 00:30:59.471220, 00:31:01.971178, 00:31:04.470977, 00:31:06.971042 and 00:31:09.471163 ...]
	I1212 00:31:11.491140  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:31:11.552135  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:11.552186  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:11.552275  525066 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
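One detail worth noting: the --validate=false workaround suggested in the error text would only skip the OpenAPI schema download; the apply itself still needs a reachable apiserver, and both endpoints involved here (localhost:8441 for validation, 192.168.49.2:8441 for the node polls) are refusing TCP connections. A quick reachability probe, sketched with the standard library, makes that failure mode explicit:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Both endpoints that refuse connections in the log: validation hits
    	// localhost:8441, the node polls hit 192.168.49.2:8441.
    	for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
    		conn, err := net.DialTimeout("tcp", addr, time.Second)
    		if err != nil {
    			fmt.Printf("%s unreachable: %v\n", addr, err) // e.g. "connect: connection refused"
    			continue
    		}
    		conn.Close()
    		fmt.Printf("%s reachable\n", addr)
    	}
    }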
	I1212 00:31:11.970638  525066 type.go:168] "Request Body" body=""
	I1212 00:31:11.970757  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:11.971089  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:11.971151  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical poll repeats every ~500ms through 00:31:25.970807, with the same connection-refused warning logged at 00:31:14.471187, 00:31:16.970848, 00:31:18.970923, 00:31:20.971777, 00:31:23.470914 and 00:31:25.970864 ...]
	I1212 00:31:26.345425  525066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:31:26.402811  525066 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:26.406955  525066 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:31:26.407059  525066 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 00:31:26.410095  525066 out.go:179] * Enabled addons: 
	I1212 00:31:26.413891  525066 addons.go:530] duration metric: took 1m46.38627975s for enable addons: enabled=[]
	I1212 00:31:26.471160  525066 type.go:168] "Request Body" body=""
	I1212 00:31:26.471237  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:26.471562  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical poll repeats every ~500ms through 00:31:41.471270, with the same connection-refused warning logged at 00:31:27.970917, 00:31:29.971218, 00:31:32.470960, 00:31:34.471101, 00:31:36.970998 and 00:31:39.470792 ...]
	W1212 00:31:41.471324  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:41.971113  525066 type.go:168] "Request Body" body=""
	I1212 00:31:41.971189  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:41.971540  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:42.471293  525066 type.go:168] "Request Body" body=""
	I1212 00:31:42.471364  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:42.471623  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:42.971377  525066 type.go:168] "Request Body" body=""
	I1212 00:31:42.971447  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:42.971777  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:43.470484  525066 type.go:168] "Request Body" body=""
	I1212 00:31:43.470559  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:43.470898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:43.970565  525066 type.go:168] "Request Body" body=""
	I1212 00:31:43.970635  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:43.970946  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:43.970995  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:44.470714  525066 type.go:168] "Request Body" body=""
	I1212 00:31:44.470786  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:44.471067  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:44.970961  525066 type.go:168] "Request Body" body=""
	I1212 00:31:44.971037  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:44.971349  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:45.471086  525066 type.go:168] "Request Body" body=""
	I1212 00:31:45.471160  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:45.471425  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:45.971255  525066 type.go:168] "Request Body" body=""
	I1212 00:31:45.971369  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:45.971732  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:45.971788  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:46.470449  525066 type.go:168] "Request Body" body=""
	I1212 00:31:46.470522  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:46.470852  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:46.970385  525066 type.go:168] "Request Body" body=""
	I1212 00:31:46.970452  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:46.970722  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:47.470437  525066 type.go:168] "Request Body" body=""
	I1212 00:31:47.470516  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:47.471054  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:47.970787  525066 type.go:168] "Request Body" body=""
	I1212 00:31:47.970859  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:47.971237  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:48.470392  525066 type.go:168] "Request Body" body=""
	I1212 00:31:48.470462  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:48.470738  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:48.470782  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:48.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:31:48.970518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:48.970858  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:49.470546  525066 type.go:168] "Request Body" body=""
	I1212 00:31:49.470624  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:49.470988  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:49.970873  525066 type.go:168] "Request Body" body=""
	I1212 00:31:49.970945  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:49.971253  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:50.471037  525066 type.go:168] "Request Body" body=""
	I1212 00:31:50.471108  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:50.471396  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:50.471445  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:50.971208  525066 type.go:168] "Request Body" body=""
	I1212 00:31:50.971282  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:50.971603  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:51.471213  525066 type.go:168] "Request Body" body=""
	I1212 00:31:51.471279  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:51.471540  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:51.971366  525066 type.go:168] "Request Body" body=""
	I1212 00:31:51.971439  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:51.971745  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:52.470473  525066 type.go:168] "Request Body" body=""
	I1212 00:31:52.470565  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:52.470989  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:52.970405  525066 type.go:168] "Request Body" body=""
	I1212 00:31:52.970478  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:52.970761  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:52.970811  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:53.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:31:53.470521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:53.470872  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:53.970497  525066 type.go:168] "Request Body" body=""
	I1212 00:31:53.970576  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:53.970925  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:54.470437  525066 type.go:168] "Request Body" body=""
	I1212 00:31:54.470505  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:54.470810  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:54.970825  525066 type.go:168] "Request Body" body=""
	I1212 00:31:54.970901  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:54.971247  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:54.971305  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:55.471027  525066 type.go:168] "Request Body" body=""
	I1212 00:31:55.471109  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:55.471438  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:55.971085  525066 type.go:168] "Request Body" body=""
	I1212 00:31:55.971149  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:55.971395  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:56.471224  525066 type.go:168] "Request Body" body=""
	I1212 00:31:56.471307  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:56.471633  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:56.970393  525066 type.go:168] "Request Body" body=""
	I1212 00:31:56.970474  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:56.970791  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:57.470420  525066 type.go:168] "Request Body" body=""
	I1212 00:31:57.470487  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:57.470757  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:57.470801  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:57.970446  525066 type.go:168] "Request Body" body=""
	I1212 00:31:57.970526  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:57.970833  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:58.470459  525066 type.go:168] "Request Body" body=""
	I1212 00:31:58.470532  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:58.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:58.970572  525066 type.go:168] "Request Body" body=""
	I1212 00:31:58.970646  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:58.970920  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:31:59.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:31:59.470548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:59.470889  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:31:59.470948  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:31:59.970930  525066 type.go:168] "Request Body" body=""
	I1212 00:31:59.971026  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:31:59.971389  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:00.470941  525066 type.go:168] "Request Body" body=""
	I1212 00:32:00.471069  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:00.471359  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:00.971229  525066 type.go:168] "Request Body" body=""
	I1212 00:32:00.971307  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:00.971647  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:01.470368  525066 type.go:168] "Request Body" body=""
	I1212 00:32:01.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:01.470795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:01.970448  525066 type.go:168] "Request Body" body=""
	I1212 00:32:01.970521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:01.970816  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:01.970862  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:02.470549  525066 type.go:168] "Request Body" body=""
	I1212 00:32:02.470624  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:02.470998  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:02.970773  525066 type.go:168] "Request Body" body=""
	I1212 00:32:02.970858  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:02.971312  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:03.471100  525066 type.go:168] "Request Body" body=""
	I1212 00:32:03.471172  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:03.471473  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:03.971278  525066 type.go:168] "Request Body" body=""
	I1212 00:32:03.971356  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:03.971686  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:03.971737  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:04.470407  525066 type.go:168] "Request Body" body=""
	I1212 00:32:04.470491  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:04.470843  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:04.970701  525066 type.go:168] "Request Body" body=""
	I1212 00:32:04.970771  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:04.971049  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:05.470747  525066 type.go:168] "Request Body" body=""
	I1212 00:32:05.470824  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:05.471189  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:05.970759  525066 type.go:168] "Request Body" body=""
	I1212 00:32:05.970838  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:05.971177  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:06.470915  525066 type.go:168] "Request Body" body=""
	I1212 00:32:06.470997  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:06.471253  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:06.471294  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:06.971057  525066 type.go:168] "Request Body" body=""
	I1212 00:32:06.971134  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:06.971488  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:07.471269  525066 type.go:168] "Request Body" body=""
	I1212 00:32:07.471344  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:07.471670  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:07.970352  525066 type.go:168] "Request Body" body=""
	I1212 00:32:07.970421  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:07.970747  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:08.470438  525066 type.go:168] "Request Body" body=""
	I1212 00:32:08.470509  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:08.470878  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:08.970449  525066 type.go:168] "Request Body" body=""
	I1212 00:32:08.970521  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:08.970871  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:08.970925  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:09.470391  525066 type.go:168] "Request Body" body=""
	I1212 00:32:09.470470  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:09.470756  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:09.970703  525066 type.go:168] "Request Body" body=""
	I1212 00:32:09.970779  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:09.971116  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:10.471033  525066 type.go:168] "Request Body" body=""
	I1212 00:32:10.471109  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:10.471417  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:10.971169  525066 type.go:168] "Request Body" body=""
	I1212 00:32:10.971238  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:10.971496  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:10.971539  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:11.471372  525066 type.go:168] "Request Body" body=""
	I1212 00:32:11.471451  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:11.471770  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:11.970469  525066 type.go:168] "Request Body" body=""
	I1212 00:32:11.970586  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:11.970898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:12.470383  525066 type.go:168] "Request Body" body=""
	I1212 00:32:12.470453  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:12.470788  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:12.970473  525066 type.go:168] "Request Body" body=""
	I1212 00:32:12.970545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:12.970889  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:13.470464  525066 type.go:168] "Request Body" body=""
	I1212 00:32:13.470555  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:13.470934  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:13.470994  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:13.970655  525066 type.go:168] "Request Body" body=""
	I1212 00:32:13.970754  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:13.971092  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:14.470457  525066 type.go:168] "Request Body" body=""
	I1212 00:32:14.470538  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:14.470903  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:14.970794  525066 type.go:168] "Request Body" body=""
	I1212 00:32:14.970878  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:14.971205  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:15.470971  525066 type.go:168] "Request Body" body=""
	I1212 00:32:15.471055  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:15.471372  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:15.471414  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:15.971237  525066 type.go:168] "Request Body" body=""
	I1212 00:32:15.971320  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:15.971640  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:16.470370  525066 type.go:168] "Request Body" body=""
	I1212 00:32:16.470445  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:16.470782  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:16.970410  525066 type.go:168] "Request Body" body=""
	I1212 00:32:16.970484  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:16.970823  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:17.470439  525066 type.go:168] "Request Body" body=""
	I1212 00:32:17.470517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:17.470864  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:17.970590  525066 type.go:168] "Request Body" body=""
	I1212 00:32:17.970664  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:17.971024  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:17.971078  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:18.470738  525066 type.go:168] "Request Body" body=""
	I1212 00:32:18.470805  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:18.471184  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:18.971020  525066 type.go:168] "Request Body" body=""
	I1212 00:32:18.971105  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:18.971458  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:19.471116  525066 type.go:168] "Request Body" body=""
	I1212 00:32:19.471188  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:19.471515  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:19.970337  525066 type.go:168] "Request Body" body=""
	I1212 00:32:19.970412  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:19.970828  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:20.471213  525066 type.go:168] "Request Body" body=""
	I1212 00:32:20.471293  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:20.471629  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:20.471692  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:20.970398  525066 type.go:168] "Request Body" body=""
	I1212 00:32:20.970472  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:20.970846  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:21.470529  525066 type.go:168] "Request Body" body=""
	I1212 00:32:21.470596  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:21.470869  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:21.970458  525066 type.go:168] "Request Body" body=""
	I1212 00:32:21.970531  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:21.970901  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:22.470489  525066 type.go:168] "Request Body" body=""
	I1212 00:32:22.470578  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:22.470923  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:22.970628  525066 type.go:168] "Request Body" body=""
	I1212 00:32:22.970719  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:22.970989  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:22.971032  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:23.470448  525066 type.go:168] "Request Body" body=""
	I1212 00:32:23.470535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:23.470894  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:23.970458  525066 type.go:168] "Request Body" body=""
	I1212 00:32:23.970535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:23.970896  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:24.470332  525066 type.go:168] "Request Body" body=""
	I1212 00:32:24.470407  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:24.470716  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:24.970747  525066 type.go:168] "Request Body" body=""
	I1212 00:32:24.970820  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:24.971165  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:24.971223  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:25.471031  525066 type.go:168] "Request Body" body=""
	I1212 00:32:25.471128  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:25.471490  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:25.971208  525066 type.go:168] "Request Body" body=""
	I1212 00:32:25.971275  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:25.971541  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:26.471290  525066 type.go:168] "Request Body" body=""
	I1212 00:32:26.471368  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:26.471700  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:26.970438  525066 type.go:168] "Request Body" body=""
	I1212 00:32:26.970517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:26.970866  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:27.470424  525066 type.go:168] "Request Body" body=""
	I1212 00:32:27.470499  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:27.470866  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:27.470914  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:27.970455  525066 type.go:168] "Request Body" body=""
	I1212 00:32:27.970540  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:27.970913  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:28.470649  525066 type.go:168] "Request Body" body=""
	I1212 00:32:28.470738  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:28.471036  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:28.970429  525066 type.go:168] "Request Body" body=""
	I1212 00:32:28.970500  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:28.970820  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:29.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:32:29.470523  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:29.470857  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:29.970808  525066 type.go:168] "Request Body" body=""
	I1212 00:32:29.970884  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:29.971262  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:29.971318  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:32:30.471202  525066 type.go:168] "Request Body" body=""
	I1212 00:32:30.471270  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:30.471570  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:30.971249  525066 type.go:168] "Request Body" body=""
	I1212 00:32:30.971333  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:30.971675  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:31.470379  525066 type.go:168] "Request Body" body=""
	I1212 00:32:31.470456  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:31.470795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:31.970429  525066 type.go:168] "Request Body" body=""
	I1212 00:32:31.970500  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:31.970830  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:32:32.470463  525066 type.go:168] "Request Body" body=""
	I1212 00:32:32.470546  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:32:32.470916  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:32:32.470969  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET/empty-response cycle above repeats every ~500 ms from 00:32:32.970 through 00:33:32.470; node_ready.go:55 logs the same warning for node "functional-035643" roughly every 2-2.5 s, and every attempt fails with: dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1212 00:33:32.970483  525066 type.go:168] "Request Body" body=""
	I1212 00:33:32.970557  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:32.970910  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:32.970970  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:33.470618  525066 type.go:168] "Request Body" body=""
	I1212 00:33:33.470712  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:33.470974  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:33.970444  525066 type.go:168] "Request Body" body=""
	I1212 00:33:33.970517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:33.970882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:34.470466  525066 type.go:168] "Request Body" body=""
	I1212 00:33:34.470544  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:34.470888  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:34.974811  525066 type.go:168] "Request Body" body=""
	I1212 00:33:34.974888  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:34.975210  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:34.975263  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:35.470393  525066 type.go:168] "Request Body" body=""
	I1212 00:33:35.470475  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:35.470774  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:35.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:33:35.970520  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:35.970921  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:36.470630  525066 type.go:168] "Request Body" body=""
	I1212 00:33:36.470714  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:36.470977  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:36.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:33:36.970519  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:36.970841  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:37.470461  525066 type.go:168] "Request Body" body=""
	I1212 00:33:37.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:37.470921  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:37.470982  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:37.970408  525066 type.go:168] "Request Body" body=""
	I1212 00:33:37.970475  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:37.970748  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:38.470525  525066 type.go:168] "Request Body" body=""
	I1212 00:33:38.470598  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:38.470966  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:38.970662  525066 type.go:168] "Request Body" body=""
	I1212 00:33:38.970757  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:38.971088  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:39.470794  525066 type.go:168] "Request Body" body=""
	I1212 00:33:39.470866  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:39.471135  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:39.471185  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:39.971164  525066 type.go:168] "Request Body" body=""
	I1212 00:33:39.971246  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:39.971584  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:40.470516  525066 type.go:168] "Request Body" body=""
	I1212 00:33:40.470591  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:40.470924  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:40.970415  525066 type.go:168] "Request Body" body=""
	I1212 00:33:40.970485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:40.971060  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:41.470451  525066 type.go:168] "Request Body" body=""
	I1212 00:33:41.470587  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:41.470946  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:41.970903  525066 type.go:168] "Request Body" body=""
	I1212 00:33:41.970980  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:41.971291  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:41.971344  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:42.471094  525066 type.go:168] "Request Body" body=""
	I1212 00:33:42.471178  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:42.471572  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:42.971249  525066 type.go:168] "Request Body" body=""
	I1212 00:33:42.971320  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:42.971651  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:43.470400  525066 type.go:168] "Request Body" body=""
	I1212 00:33:43.470476  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:43.470790  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:43.970434  525066 type.go:168] "Request Body" body=""
	I1212 00:33:43.970503  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:43.970849  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:44.470442  525066 type.go:168] "Request Body" body=""
	I1212 00:33:44.470512  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:44.470828  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:44.470882  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:44.970788  525066 type.go:168] "Request Body" body=""
	I1212 00:33:44.970861  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:44.971179  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:45.471027  525066 type.go:168] "Request Body" body=""
	I1212 00:33:45.471095  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:45.471384  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:45.971175  525066 type.go:168] "Request Body" body=""
	I1212 00:33:45.971254  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:45.971602  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:46.471390  525066 type.go:168] "Request Body" body=""
	I1212 00:33:46.471469  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:46.471785  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:46.471843  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:46.970475  525066 type.go:168] "Request Body" body=""
	I1212 00:33:46.970546  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:46.970906  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:47.470431  525066 type.go:168] "Request Body" body=""
	I1212 00:33:47.470507  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:47.470868  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:47.970663  525066 type.go:168] "Request Body" body=""
	I1212 00:33:47.970759  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:47.971097  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:48.470801  525066 type.go:168] "Request Body" body=""
	I1212 00:33:48.470875  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:48.471136  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:48.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:33:48.970518  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:48.970883  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:48.970943  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:49.470484  525066 type.go:168] "Request Body" body=""
	I1212 00:33:49.470558  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:49.470901  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:49.970711  525066 type.go:168] "Request Body" body=""
	I1212 00:33:49.970783  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:49.971100  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:50.471121  525066 type.go:168] "Request Body" body=""
	I1212 00:33:50.471191  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:50.471492  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:50.971285  525066 type.go:168] "Request Body" body=""
	I1212 00:33:50.971368  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:50.971704  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:50.971758  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:51.470388  525066 type.go:168] "Request Body" body=""
	I1212 00:33:51.470461  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:51.470790  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:51.970459  525066 type.go:168] "Request Body" body=""
	I1212 00:33:51.970536  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:51.970878  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:52.470599  525066 type.go:168] "Request Body" body=""
	I1212 00:33:52.470668  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:52.471040  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:52.970519  525066 type.go:168] "Request Body" body=""
	I1212 00:33:52.970595  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:52.970943  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:53.470479  525066 type.go:168] "Request Body" body=""
	I1212 00:33:53.470557  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:53.470898  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:53.470998  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:53.970658  525066 type.go:168] "Request Body" body=""
	I1212 00:33:53.970752  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:53.971087  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:54.470382  525066 type.go:168] "Request Body" body=""
	I1212 00:33:54.470475  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:54.470745  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:54.971392  525066 type.go:168] "Request Body" body=""
	I1212 00:33:54.971460  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:54.971785  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:55.470484  525066 type.go:168] "Request Body" body=""
	I1212 00:33:55.470559  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:55.470901  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:55.970438  525066 type.go:168] "Request Body" body=""
	I1212 00:33:55.970506  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:55.970773  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:55.970819  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:56.470532  525066 type.go:168] "Request Body" body=""
	I1212 00:33:56.470620  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:56.470993  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:56.970746  525066 type.go:168] "Request Body" body=""
	I1212 00:33:56.970823  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:56.971164  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:57.470798  525066 type.go:168] "Request Body" body=""
	I1212 00:33:57.470887  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:57.471201  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:57.970983  525066 type.go:168] "Request Body" body=""
	I1212 00:33:57.971057  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:57.971379  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:33:57.971435  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:33:58.471288  525066 type.go:168] "Request Body" body=""
	I1212 00:33:58.471374  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:58.471710  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:58.970417  525066 type.go:168] "Request Body" body=""
	I1212 00:33:58.970485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:58.970786  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:59.470436  525066 type.go:168] "Request Body" body=""
	I1212 00:33:59.470537  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:59.470835  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:33:59.970792  525066 type.go:168] "Request Body" body=""
	I1212 00:33:59.970864  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:33:59.971190  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:00.471100  525066 type.go:168] "Request Body" body=""
	I1212 00:34:00.471178  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:00.471446  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:00.471491  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:00.971317  525066 type.go:168] "Request Body" body=""
	I1212 00:34:00.971392  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:00.971704  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:01.470405  525066 type.go:168] "Request Body" body=""
	I1212 00:34:01.470478  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:01.470824  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:01.970499  525066 type.go:168] "Request Body" body=""
	I1212 00:34:01.970587  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:01.970878  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:02.470450  525066 type.go:168] "Request Body" body=""
	I1212 00:34:02.470548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:02.470875  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:02.970486  525066 type.go:168] "Request Body" body=""
	I1212 00:34:02.970558  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:02.970903  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:02.970960  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:03.470466  525066 type.go:168] "Request Body" body=""
	I1212 00:34:03.470543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:03.470886  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:03.970447  525066 type.go:168] "Request Body" body=""
	I1212 00:34:03.970517  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:03.970835  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:04.470526  525066 type.go:168] "Request Body" body=""
	I1212 00:34:04.470601  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:04.470936  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:04.970883  525066 type.go:168] "Request Body" body=""
	I1212 00:34:04.970968  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:04.971228  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:04.971278  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:05.471031  525066 type.go:168] "Request Body" body=""
	I1212 00:34:05.471105  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:05.471416  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:05.971158  525066 type.go:168] "Request Body" body=""
	I1212 00:34:05.971232  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:05.971554  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:06.471186  525066 type.go:168] "Request Body" body=""
	I1212 00:34:06.471254  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:06.471579  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:06.971380  525066 type.go:168] "Request Body" body=""
	I1212 00:34:06.971454  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:06.971795  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:06.971845  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:07.470453  525066 type.go:168] "Request Body" body=""
	I1212 00:34:07.470532  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:07.470891  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:07.970570  525066 type.go:168] "Request Body" body=""
	I1212 00:34:07.970640  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:07.970974  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:08.470454  525066 type.go:168] "Request Body" body=""
	I1212 00:34:08.470526  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:08.470855  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:08.970451  525066 type.go:168] "Request Body" body=""
	I1212 00:34:08.970523  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:08.970873  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:09.470384  525066 type.go:168] "Request Body" body=""
	I1212 00:34:09.470463  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:09.470759  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:09.470810  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:09.970476  525066 type.go:168] "Request Body" body=""
	I1212 00:34:09.970548  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:09.970914  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:10.470339  525066 type.go:168] "Request Body" body=""
	I1212 00:34:10.470412  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:10.470749  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:10.970418  525066 type.go:168] "Request Body" body=""
	I1212 00:34:10.970489  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:10.970837  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:11.470433  525066 type.go:168] "Request Body" body=""
	I1212 00:34:11.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:11.470863  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:11.470914  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:11.970593  525066 type.go:168] "Request Body" body=""
	I1212 00:34:11.970672  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:11.971074  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:12.470513  525066 type.go:168] "Request Body" body=""
	I1212 00:34:12.470580  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:12.470938  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:12.970456  525066 type.go:168] "Request Body" body=""
	I1212 00:34:12.970537  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:12.970882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:13.470574  525066 type.go:168] "Request Body" body=""
	I1212 00:34:13.470650  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:13.471032  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:13.471090  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:13.970419  525066 type.go:168] "Request Body" body=""
	I1212 00:34:13.970498  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:13.970791  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:14.470474  525066 type.go:168] "Request Body" body=""
	I1212 00:34:14.470543  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:14.470845  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:14.970750  525066 type.go:168] "Request Body" body=""
	I1212 00:34:14.970824  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:14.971143  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:15.470788  525066 type.go:168] "Request Body" body=""
	I1212 00:34:15.470856  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:15.471125  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:15.471166  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:15.970468  525066 type.go:168] "Request Body" body=""
	I1212 00:34:15.970542  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:15.970859  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:16.470484  525066 type.go:168] "Request Body" body=""
	I1212 00:34:16.470555  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:16.470882  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:16.970423  525066 type.go:168] "Request Body" body=""
	I1212 00:34:16.970496  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:16.970812  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:17.470487  525066 type.go:168] "Request Body" body=""
	I1212 00:34:17.470558  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:17.470975  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:17.970713  525066 type.go:168] "Request Body" body=""
	I1212 00:34:17.970796  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:17.971146  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:17.971201  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:18.470794  525066 type.go:168] "Request Body" body=""
	I1212 00:34:18.470865  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:18.471131  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:18.970459  525066 type.go:168] "Request Body" body=""
	I1212 00:34:18.970539  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:18.970892  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:19.470617  525066 type.go:168] "Request Body" body=""
	I1212 00:34:19.470700  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:19.471001  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:19.970879  525066 type.go:168] "Request Body" body=""
	I1212 00:34:19.970960  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:19.971231  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:19.971282  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:34:20.470446  525066 type.go:168] "Request Body" body=""
	I1212 00:34:20.470523  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:20.470854  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:20.970561  525066 type.go:168] "Request Body" body=""
	I1212 00:34:20.970634  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:20.970964  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:21.470548  525066 type.go:168] "Request Body" body=""
	I1212 00:34:21.470625  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:21.470970  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:21.970472  525066 type.go:168] "Request Body" body=""
	I1212 00:34:21.970545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:21.970883  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:34:22.470450  525066 type.go:168] "Request Body" body=""
	I1212 00:34:22.470526  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:34:22.470863  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:34:22.470920  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-035643 request/empty-response cycle repeats at ~500ms intervals (~120 further attempts, 00:34:22.970 through 00:35:23.470), with the node_ready.go:55 "connection refused" warning re-logged roughly every 2-2.5 seconds, last at W1212 00:35:22.470893 ...]
	I1212 00:35:23.970472  525066 type.go:168] "Request Body" body=""
	I1212 00:35:23.970544  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:23.970934  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:24.470526  525066 type.go:168] "Request Body" body=""
	I1212 00:35:24.470602  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:24.470885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:24.470937  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:24.970814  525066 type.go:168] "Request Body" body=""
	I1212 00:35:24.970900  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:24.971212  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:25.470981  525066 type.go:168] "Request Body" body=""
	I1212 00:35:25.471083  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:25.471412  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:25.971186  525066 type.go:168] "Request Body" body=""
	I1212 00:35:25.971270  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:25.971542  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:26.471296  525066 type.go:168] "Request Body" body=""
	I1212 00:35:26.471372  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:26.471691  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:26.471748  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:26.970425  525066 type.go:168] "Request Body" body=""
	I1212 00:35:26.970494  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:26.970788  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:27.470473  525066 type.go:168] "Request Body" body=""
	I1212 00:35:27.470545  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:27.470900  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:27.970608  525066 type.go:168] "Request Body" body=""
	I1212 00:35:27.970694  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:27.971049  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:28.470775  525066 type.go:168] "Request Body" body=""
	I1212 00:35:28.470856  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:28.471187  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:28.970958  525066 type.go:168] "Request Body" body=""
	I1212 00:35:28.971022  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:28.971277  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:28.971316  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:29.471162  525066 type.go:168] "Request Body" body=""
	I1212 00:35:29.471240  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:29.471593  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:29.970376  525066 type.go:168] "Request Body" body=""
	I1212 00:35:29.970454  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:29.970816  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:30.471109  525066 type.go:168] "Request Body" body=""
	I1212 00:35:30.471183  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:30.471480  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:30.971287  525066 type.go:168] "Request Body" body=""
	I1212 00:35:30.971360  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:30.971672  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:30.971729  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:31.470405  525066 type.go:168] "Request Body" body=""
	I1212 00:35:31.470485  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:31.470830  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:31.970549  525066 type.go:168] "Request Body" body=""
	I1212 00:35:31.970619  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:31.970957  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:32.470649  525066 type.go:168] "Request Body" body=""
	I1212 00:35:32.470745  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:32.471093  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:32.970460  525066 type.go:168] "Request Body" body=""
	I1212 00:35:32.970533  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:32.970861  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:33.470443  525066 type.go:168] "Request Body" body=""
	I1212 00:35:33.470511  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:33.470783  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:33.470825  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:33.970454  525066 type.go:168] "Request Body" body=""
	I1212 00:35:33.970535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:33.970883  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:34.470595  525066 type.go:168] "Request Body" body=""
	I1212 00:35:34.470673  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:34.471021  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:34.970778  525066 type.go:168] "Request Body" body=""
	I1212 00:35:34.970845  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:34.971108  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:35.470789  525066 type.go:168] "Request Body" body=""
	I1212 00:35:35.470893  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:35.471408  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:35.471455  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:35.971178  525066 type.go:168] "Request Body" body=""
	I1212 00:35:35.971249  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:35.971545  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:36.471287  525066 type.go:168] "Request Body" body=""
	I1212 00:35:36.471358  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:36.471623  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:36.970386  525066 type.go:168] "Request Body" body=""
	I1212 00:35:36.970468  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:36.970815  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:37.470527  525066 type.go:168] "Request Body" body=""
	I1212 00:35:37.470612  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:37.470950  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:37.970440  525066 type.go:168] "Request Body" body=""
	I1212 00:35:37.970510  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:37.970824  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:37.970880  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:38.470422  525066 type.go:168] "Request Body" body=""
	I1212 00:35:38.470503  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:38.470828  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:38.970459  525066 type.go:168] "Request Body" body=""
	I1212 00:35:38.970529  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:38.970885  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:39.470567  525066 type.go:168] "Request Body" body=""
	I1212 00:35:39.470634  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:39.470915  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:39.971016  525066 type.go:168] "Request Body" body=""
	I1212 00:35:39.971090  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:39.971449  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:35:39.971507  525066 node_ready.go:55] error getting node "functional-035643" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-035643": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:35:40.470458  525066 type.go:168] "Request Body" body=""
	I1212 00:35:40.470535  525066 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-035643" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:35:40.470907  525066 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:35:40.970388  525066 type.go:168] "Request Body" body=""
	I1212 00:35:40.970449  525066 node_ready.go:38] duration metric: took 6m0.000230679s for node "functional-035643" to be "Ready" ...
	I1212 00:35:40.973928  525066 out.go:203] 
	W1212 00:35:40.976747  525066 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 00:35:40.976773  525066 out.go:285] * 
	W1212 00:35:40.981440  525066 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:35:40.984739  525066 out.go:203] 
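For reference, the Ready condition this wait loop was polling can be checked by hand once the apiserver answers; a minimal sketch, assuming the kubeconfig path this job uses (shown in the stdout further below):

	# Query the node's Ready condition directly (sketch; this is the same object
	# the node_ready.go loop above polled via GET /api/v1/nodes/functional-035643)
	kubectl --kubeconfig=/home/jenkins/minikube-integration/22101-487723/kubeconfig \
	  get node functional-035643 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'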
	
	
	==> CRI-O <==
	Dec 12 00:35:50 functional-035643 crio[5335]: time="2025-12-12T00:35:50.093738425Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=306252a9-50d1-4cf7-879f-649314fb6779 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.166350422Z" level=info msg="Checking image status: minikube-local-cache-test:functional-035643" id=6ec98bc0-0fe5-4d00-8dbc-0c76e49800c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.166547981Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.166597826Z" level=info msg="Image minikube-local-cache-test:functional-035643 not found" id=6ec98bc0-0fe5-4d00-8dbc-0c76e49800c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.166674804Z" level=info msg="Neither image nor artifact minikube-local-cache-test:functional-035643 found" id=6ec98bc0-0fe5-4d00-8dbc-0c76e49800c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.191658974Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-035643" id=2cec4339-ccdf-4bb9-bbfe-6be8600e4cfb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.19180233Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-035643 not found" id=2cec4339-ccdf-4bb9-bbfe-6be8600e4cfb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.191844077Z" level=info msg="Neither image nor artifact docker.io/library/minikube-local-cache-test:functional-035643 found" id=2cec4339-ccdf-4bb9-bbfe-6be8600e4cfb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.217264929Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-035643" id=37843a35-a46e-4c4e-8f5c-a022b5d36fb4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.217422563Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-035643 not found" id=37843a35-a46e-4c4e-8f5c-a022b5d36fb4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:51 functional-035643 crio[5335]: time="2025-12-12T00:35:51.217481129Z" level=info msg="Neither image nor artifact localhost/library/minikube-local-cache-test:functional-035643 found" id=37843a35-a46e-4c4e-8f5c-a022b5d36fb4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:52 functional-035643 crio[5335]: time="2025-12-12T00:35:52.176503978Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=0e38953a-955f-4bff-9e4f-43d0bbe4bbce name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:52 functional-035643 crio[5335]: time="2025-12-12T00:35:52.504101247Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=ea64e7ef-dc23-4ef6-bc3a-61a9d8f40935 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:52 functional-035643 crio[5335]: time="2025-12-12T00:35:52.50424252Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=ea64e7ef-dc23-4ef6-bc3a-61a9d8f40935 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:52 functional-035643 crio[5335]: time="2025-12-12T00:35:52.504277284Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=ea64e7ef-dc23-4ef6-bc3a-61a9d8f40935 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.112644556Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=4dea7078-8f6a-4566-b66f-f279f8eb3817 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.112790915Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=4dea7078-8f6a-4566-b66f-f279f8eb3817 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.112826672Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=4dea7078-8f6a-4566-b66f-f279f8eb3817 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.135996574Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=fbf6d603-4b1c-4b27-a3bc-c9cd9d3d9669 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.136166318Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=fbf6d603-4b1c-4b27-a3bc-c9cd9d3d9669 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.136218017Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=fbf6d603-4b1c-4b27-a3bc-c9cd9d3d9669 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.161115157Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=9c700902-8781-4f47-a05e-d73425f0903f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.161266193Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=9c700902-8781-4f47-a05e-d73425f0903f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.161316358Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=9c700902-8781-4f47-a05e-d73425f0903f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:35:53 functional-035643 crio[5335]: time="2025-12-12T00:35:53.688624008Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=95959e99-0c5e-4a6b-b332-e8c760445a5d name=/runtime.v1.ImageService/ImageStatus
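The repeated "not found" lookups above can be reproduced, and the missing pause image pre-pulled, with crictl; a sketch, assuming crictl is available inside the minikube node image:

	# Reproduce CRI-O's image lookup, then pull the image it reports missing
	# (sketch; "functional-035643" is the node container name from this profile)
	docker exec functional-035643 crictl inspecti registry.k8s.io/pause:latest
	docker exec functional-035643 crictl pull registry.k8s.io/pause:latest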
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:35:57.589143    9504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:57.589911    9504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:57.591464    9504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:57.592045    9504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:57.593725    9504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
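The refusals above are consistent with nothing listening on the apiserver port inside the node; a quick hedged check (assuming ss from iproute2 is present in the node image):

	# See whether any process is bound to the apiserver port 8441 inside the node
	docker exec functional-035643 ss -ltn | grep 8441 || echo "nothing listening on 8441"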
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 00:17] overlayfs: idmapped layers are currently not supported
	[Dec12 00:18] overlayfs: idmapped layers are currently not supported
	[Dec12 00:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:35:57 up  3:18,  0 user,  load average: 0.31, 0.32, 0.79
	Linux functional-035643 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:35:55 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:55 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1158.
	Dec 12 00:35:55 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:55 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:56 functional-035643 kubelet[9376]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:56 functional-035643 kubelet[9376]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:56 functional-035643 kubelet[9376]: E1212 00:35:56.040648    9376 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:56 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:56 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:56 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1159.
	Dec 12 00:35:56 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:56 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:56 functional-035643 kubelet[9412]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:56 functional-035643 kubelet[9412]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:56 functional-035643 kubelet[9412]: E1212 00:35:56.797718    9412 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:56 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:56 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:57 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1160.
	Dec 12 00:35:57 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:57 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:57 functional-035643 kubelet[9487]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:57 functional-035643 kubelet[9487]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:35:57 functional-035643 kubelet[9487]: E1212 00:35:57.531451    9487 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:57 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:57 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
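The crash loop above is the kubelet refusing to validate its configuration on a cgroup v1 host. The opt-out that the kubeadm warning later in this log names ('FailCgroupV1') is a KubeletConfiguration field; a minimal excerpt of what the generated config would need, as a sketch only (note that kubeadm rewrites /var/lib/kubelet/config.yaml on each init, per the kubelet-start lines below):

	# /var/lib/kubelet/config.yaml (excerpt, sketch) -- keep running on cgroup v1,
	# which upstream marks deprecated and slated for removal
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false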
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 2 (329.77723ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.45s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-035643 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 00:38:33.592974  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:40:17.604942  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:41:40.674226  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:43:33.594793  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:45:17.605100  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-035643 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.093011626s)

-- stdout --
	* [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001882345s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
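Both commands kubeadm suggests can be run from the host through the node container; a sketch:

	# kubeadm's suggested checks, executed inside the node container
	docker exec functional-035643 systemctl status kubelet --no-pager
	docker exec functional-035643 journalctl -xeu kubelet --no-pager | tail -n 50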
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[... preflight, certs, kubeconfig, control-plane and kubelet-start output identical to the "initialization failed, will try again" attempt above; the kubelet again failed its health check after 4m0s ...]
	
	stderr:
	[... same three SystemVerification / Service-kubelet warnings as above ...]
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[... kubeadm init stdout and stderr identical to the "X Error starting cluster" block above, ending in the same "connection refused" health-check error ...]
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
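A sketch of the retry the suggestion describes, combining the test's original invocation with the proposed kubelet override (flag values taken verbatim from the Suggestion line above):

	out/minikube-linux-arm64 start -p functional-035643 --wait=all \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --extra-config=kubelet.cgroup-driver=systemd

Given that the kubelet's failure here is the cgroup v1 validation rather than a driver mismatch, the failCgroupV1 excerpt sketched earlier is the more likely lever.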
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-035643 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.094187291s for "functional-035643" cluster.
I1212 00:48:10.729814  490954 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-035643
helpers_test.go:244: (dbg) docker inspect functional-035643:

-- stdout --
	[
	    {
	        "Id": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	        "Created": "2025-12-12T00:21:16.539894649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:21:16.600605162Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hostname",
	        "HostsPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hosts",
	        "LogPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a-json.log",
	        "Name": "/functional-035643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-035643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	                "LowerDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-035643",
	                "Source": "/var/lib/docker/volumes/functional-035643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035643",
	                "name.minikube.sigs.k8s.io": "functional-035643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ede6a17442d6bf83b8f4c9f93f252345cec3d0406f82de2d6bd2cfd4713e2163",
	            "SandboxKey": "/var/run/docker/netns/ede6a17442d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d5:12:89:ea:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad01995b183fdebead6c725e2b942ae8ce2d3964b3552789fe5b50ee7e7239a3",
	                    "EndpointID": "d429a1cd0f840d042af4ad7ea0bda6067a342be7fb552083411004a3604b0124",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035643",
	                        "02b8c8e636a5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
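
The dump above is the container's full inspect record; later in this log the harness pulls out single fields with Go templates instead of parsing the whole document. A minimal sketch of that pattern, using only the docker CLI and the container name from this run:

	# Host port mapped to the container's SSH port (22/tcp); the provisioner
	# below resolves this to 127.0.0.1:33183 for this run.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-035643
	# Container state, matching the State.Status field shown above.
	docker container inspect -f '{{.State.Status}}' functional-035643
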
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643: exit status 2 (317.66255ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
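
The "may be ok" note reflects minikube's status convention: the exit status encodes host, cluster, and Kubernetes state on separate bits, so exit status 2 alongside a Running host points at an unhealthy control plane rather than a dead machine. A hedged sketch of reading both signals, reusing the invocation from this run:

	# Capture the host state and the bit-encoded exit status separately;
	# a non-zero exit alone is not proof that the host is down.
	out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
	echo "status exit code: $?"
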
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-921447 image ls --format short --alsologtostderr                                                                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image ls --format json --alsologtostderr                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-921447 ssh pgrep buildkitd                                                                                                             │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ image   │ functional-921447 image ls --format table --alsologtostderr                                                                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image build -t localhost/my-image:functional-921447 testdata/build --alsologtostderr                                            │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image ls                                                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ delete  │ -p functional-921447                                                                                                                              │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ start   │ -p functional-035643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ start   │ -p functional-035643 --alsologtostderr -v=8                                                                                                       │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:29 UTC │                     │
	│ cache   │ functional-035643 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache add registry.k8s.io/pause:latest                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache add minikube-local-cache-test:functional-035643                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache delete minikube-local-cache-test:functional-035643                                                                        │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl images                                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ cache   │ functional-035643 cache reload                                                                                                                    │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ kubectl │ functional-035643 kubectl -- --context functional-035643 get pods                                                                                 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ start   │ -p functional-035643 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:58.676999  530956 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:58.677109  530956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:58.677113  530956 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:58.677117  530956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:58.677347  530956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:35:58.677686  530956 out.go:368] Setting JSON to false
	I1212 00:35:58.678525  530956 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11904,"bootTime":1765487855,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:35:58.678585  530956 start.go:143] virtualization:  
	I1212 00:35:58.682116  530956 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:35:58.686138  530956 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:58.686257  530956 notify.go:221] Checking for updates...
	I1212 00:35:58.691862  530956 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:58.694918  530956 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:35:58.697806  530956 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:35:58.700662  530956 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:35:58.703472  530956 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:58.706890  530956 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:58.706982  530956 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:58.735768  530956 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:35:58.735882  530956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:58.786774  530956 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:35:58.777518712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:58.786886  530956 docker.go:319] overlay module found
	I1212 00:35:58.790016  530956 out.go:179] * Using the docker driver based on existing profile
	I1212 00:35:58.792828  530956 start.go:309] selected driver: docker
	I1212 00:35:58.792840  530956 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:58.792956  530956 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:58.793078  530956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:58.848144  530956 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:35:58.839160729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:58.848551  530956 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:35:58.848575  530956 cni.go:84] Creating CNI manager for ""
	I1212 00:35:58.848625  530956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:58.848666  530956 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:58.851767  530956 out.go:179] * Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	I1212 00:35:58.854549  530956 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:35:58.857426  530956 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:35:58.860284  530956 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:58.860323  530956 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:35:58.860332  530956 cache.go:65] Caching tarball of preloaded images
	I1212 00:35:58.860357  530956 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:35:58.860418  530956 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:35:58.860426  530956 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:35:58.860536  530956 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/config.json ...
	I1212 00:35:58.879785  530956 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:35:58.879795  530956 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:35:58.879813  530956 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:35:58.879843  530956 start.go:360] acquireMachinesLock for functional-035643: {Name:mkb0cdc7d354412594dc63c0234fde00134e758d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:35:58.879904  530956 start.go:364] duration metric: took 45.603µs to acquireMachinesLock for "functional-035643"
	I1212 00:35:58.879924  530956 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:35:58.879928  530956 fix.go:54] fixHost starting: 
	I1212 00:35:58.880192  530956 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:35:58.897119  530956 fix.go:112] recreateIfNeeded on functional-035643: state=Running err=<nil>
	W1212 00:35:58.897146  530956 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:35:58.900349  530956 out.go:252] * Updating the running docker "functional-035643" container ...
	I1212 00:35:58.900378  530956 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:58.900465  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:58.917663  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:58.917980  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:58.917985  530956 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:59.082110  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:35:59.082124  530956 ubuntu.go:182] provisioning hostname "functional-035643"
	I1212 00:35:59.082187  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.099710  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:59.100009  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:59.100017  530956 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-035643 && echo "functional-035643" | sudo tee /etc/hostname
	I1212 00:35:59.259555  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:35:59.259640  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.277248  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:59.277556  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:59.277570  530956 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-035643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-035643/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-035643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:59.427001  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:59.427018  530956 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 00:35:59.427041  530956 ubuntu.go:190] setting up certificates
	I1212 00:35:59.427057  530956 provision.go:84] configureAuth start
	I1212 00:35:59.427116  530956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:35:59.444510  530956 provision.go:143] copyHostCerts
	I1212 00:35:59.444577  530956 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 00:35:59.444584  530956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:35:59.444656  530956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 00:35:59.444762  530956 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 00:35:59.444766  530956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:35:59.444790  530956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 00:35:59.444853  530956 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 00:35:59.444856  530956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:35:59.444879  530956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 00:35:59.444932  530956 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.functional-035643 san=[127.0.0.1 192.168.49.2 functional-035643 localhost minikube]
	I1212 00:35:59.773887  530956 provision.go:177] copyRemoteCerts
	I1212 00:35:59.773940  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:59.773979  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.792006  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:35:59.898459  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:35:59.916125  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:35:59.934437  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:35:59.951804  530956 provision.go:87] duration metric: took 524.726096ms to configureAuth
	I1212 00:35:59.951820  530956 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:59.952018  530956 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:59.952114  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.968939  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:59.969228  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:59.969239  530956 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:36:00.563754  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:36:00.563766  530956 machine.go:97] duration metric: took 1.663381425s to provisionDockerMachine
	I1212 00:36:00.563776  530956 start.go:293] postStartSetup for "functional-035643" (driver="docker")
	I1212 00:36:00.563787  530956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:36:00.563864  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:36:00.563909  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.587628  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:00.694584  530956 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:36:00.698084  530956 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:36:00.698101  530956 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:36:00.698111  530956 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 00:36:00.698167  530956 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 00:36:00.698253  530956 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 00:36:00.698337  530956 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> hosts in /etc/test/nested/copy/490954
	I1212 00:36:00.698388  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/490954
	I1212 00:36:00.706001  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:36:00.723687  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts --> /etc/test/nested/copy/490954/hosts (40 bytes)
	I1212 00:36:00.741785  530956 start.go:296] duration metric: took 177.995516ms for postStartSetup
	I1212 00:36:00.741883  530956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:36:00.741922  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.760230  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:00.864012  530956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:36:00.868713  530956 fix.go:56] duration metric: took 1.988777195s for fixHost
	I1212 00:36:00.868727  530956 start.go:83] releasing machines lock for "functional-035643", held for 1.988815594s
	I1212 00:36:00.868792  530956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:36:00.885011  530956 ssh_runner.go:195] Run: cat /version.json
	I1212 00:36:00.885055  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.885313  530956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:36:00.885366  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.906879  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:00.908992  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:01.113200  530956 ssh_runner.go:195] Run: systemctl --version
	I1212 00:36:01.120029  530956 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:36:01.159180  530956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:36:01.163912  530956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:36:01.163983  530956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:36:01.172622  530956 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:36:01.172636  530956 start.go:496] detecting cgroup driver to use...
	I1212 00:36:01.172680  530956 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:36:01.172728  530956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:36:01.189532  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:36:01.203890  530956 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:36:01.203963  530956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:36:01.220816  530956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:36:01.234536  530956 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:36:01.370158  530956 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:36:01.488527  530956 docker.go:234] disabling docker service ...
	I1212 00:36:01.488594  530956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:36:01.503932  530956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:36:01.516796  530956 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:36:01.637401  530956 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:36:01.761796  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:36:01.774534  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:36:01.788471  530956 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:36:01.788535  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.797095  530956 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:36:01.797168  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.806445  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.815271  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.824092  530956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:36:01.832291  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.841209  530956 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.851179  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.859893  530956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:36:01.867359  530956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:36:01.874599  530956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:01.993195  530956 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:36:02.173735  530956 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:36:02.173807  530956 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:36:02.177649  530956 start.go:564] Will wait 60s for crictl version
	I1212 00:36:02.177702  530956 ssh_runner.go:195] Run: which crictl
	I1212 00:36:02.181255  530956 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:36:02.206520  530956 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:36:02.206592  530956 ssh_runner.go:195] Run: crio --version
	I1212 00:36:02.236053  530956 ssh_runner.go:195] Run: crio --version
	I1212 00:36:02.270501  530956 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:36:02.273364  530956 cli_runner.go:164] Run: docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:36:02.289602  530956 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:36:02.296412  530956 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 00:36:02.299311  530956 kubeadm.go:884] updating cluster {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:36:02.299467  530956 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:36:02.299536  530956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:36:02.337479  530956 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:36:02.337493  530956 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:36:02.337550  530956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:36:02.363122  530956 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:36:02.363134  530956 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:36:02.363141  530956 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 00:36:02.363237  530956 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-035643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:36:02.363318  530956 ssh_runner.go:195] Run: crio config
	I1212 00:36:02.413513  530956 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 00:36:02.413532  530956 cni.go:84] Creating CNI manager for ""
	I1212 00:36:02.413540  530956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:36:02.413548  530956 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:36:02.413569  530956 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-035643 NodeName:functional-035643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:36:02.413686  530956 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-035643"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
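
The rendered kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. One quick way to sanity-check such a stream, sketched with the third-party gopkg.in/yaml.v3 package (an assumption; minikube validates it differently):

```go
// Sketch: walking the multi-document stream with the third-party
// gopkg.in/yaml.v3 package (an assumption; minikube validates differently).
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		// Expected here: InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```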
	
	I1212 00:36:02.413753  530956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:36:02.421266  530956 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:36:02.421324  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:36:02.428464  530956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:36:02.441052  530956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:36:02.453157  530956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1212 00:36:02.466066  530956 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:36:02.472532  530956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:02.578480  530956 ssh_runner.go:195] Run: sudo systemctl start kubelet
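
The daemon-reload before the kubelet start matters: systemd only picks up the freshly written 10-kubeadm.conf drop-in after re-reading unit files. The two commands, run locally with os/exec as a stand-in for minikube's ssh_runner (illustrative only):

```go
// Sketch: the reload-then-start pair from the log, run locally with os/exec
// as a stand-in for minikube's ssh_runner. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// daemon-reload first: systemd only applies the new ExecStart from the
	// drop-in after re-reading unit files.
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "start", "kubelet"},
	} {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %v\n%s", args, err, out))
		}
	}
}
```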
	I1212 00:36:02.719058  530956 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643 for IP: 192.168.49.2
	I1212 00:36:02.719069  530956 certs.go:195] generating shared ca certs ...
	I1212 00:36:02.719086  530956 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:36:02.719283  530956 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 00:36:02.719337  530956 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 00:36:02.719344  530956 certs.go:257] generating profile certs ...
	I1212 00:36:02.719449  530956 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key
	I1212 00:36:02.719541  530956 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493
	I1212 00:36:02.719585  530956 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key
	I1212 00:36:02.719735  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 00:36:02.719767  530956 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 00:36:02.719779  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:36:02.719809  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:36:02.719833  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:36:02.719859  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 00:36:02.719902  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:36:02.720656  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:36:02.742914  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:36:02.761747  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:36:02.779250  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:36:02.796535  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:36:02.813979  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:36:02.832344  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:36:02.850165  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:36:02.867847  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 00:36:02.887774  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:36:02.905148  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 00:36:02.923137  530956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:36:02.936200  530956 ssh_runner.go:195] Run: openssl version
	I1212 00:36:02.943771  530956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 00:36:02.951677  530956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 00:36:02.959104  530956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 00:36:02.962881  530956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:36:02.962937  530956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 00:36:03.006038  530956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:36:03.014202  530956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.022168  530956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 00:36:03.030174  530956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.033892  530956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.033949  530956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.075143  530956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:36:03.082587  530956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.089740  530956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:36:03.097209  530956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.100982  530956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.101039  530956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.141961  530956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
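
The hash-and-symlink pairs above implement OpenSSL's trust-store lookup: a CA is located via a symlink named <subject-hash>.0 under /etc/ssl/certs (here b5213941.0 for minikubeCA.pem). One round of that dance, sketched in Go with paths taken from the log (not minikube's actual implementation):

```go
// Sketch of one round of the hash-and-link dance (paths from the log;
// not minikube's actual implementation).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// ".0" = first certificate with this hash in the directory.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // mirror `ln -fs`: replace any existing link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}
```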
	I1212 00:36:03.149082  530956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:36:03.152710  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:36:03.193308  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:36:03.236349  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:36:03.279368  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:36:03.320758  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:36:03.362313  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
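
Each `-checkend 86400` call above asks whether a certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force regeneration. The same check in pure Go with crypto/x509, as a sketch:

```go
// Sketch: a pure-Go equivalent of `openssl x509 -checkend 86400` as used in
// the log, reporting whether a cert expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expires within d iff (now + d) is past NotAfter.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```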
	I1212 00:36:03.403564  530956 kubeadm.go:401] StartCluster: {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:36:03.403639  530956 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:36:03.403697  530956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:36:03.429883  530956 cri.go:89] found id: ""
	I1212 00:36:03.429959  530956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:36:03.437518  530956 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:36:03.437528  530956 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:36:03.437580  530956 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:36:03.444705  530956 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.445211  530956 kubeconfig.go:125] found "functional-035643" server: "https://192.168.49.2:8441"
	I1212 00:36:03.446485  530956 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:36:03.453928  530956 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 00:21:24.717912452 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 00:36:02.461560447 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
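
The drift check above relies on diff's exit code: 0 means the deployed kubeadm.yaml matches the newly rendered one, 1 means they differ and the cluster must be reconfigured. That decision, sketched in Go (a hypothetical wrapper, not minikube's code):

```go
// Sketch of the drift decision (hypothetical wrapper, not minikube's code):
// `diff -u` exits 0 on identical files and 1 on differences.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output() // stdout still holds the diff on exit status 1
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubeadm config unchanged")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Printf("config drift detected:\n%s", out)
	default:
		panic(err)
	}
}
```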
	I1212 00:36:03.453947  530956 kubeadm.go:1161] stopping kube-system containers ...
	I1212 00:36:03.453959  530956 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 00:36:03.454013  530956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:36:03.481725  530956 cri.go:89] found id: ""
	I1212 00:36:03.481784  530956 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:36:03.499216  530956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:36:03.507872  530956 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 00:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 12 00:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 12 00:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 00:25 /etc/kubernetes/scheduler.conf
	
	I1212 00:36:03.507966  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:36:03.516663  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:36:03.524482  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.524541  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:36:03.532121  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:36:03.539690  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.539749  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:36:03.547386  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:36:03.555458  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.555515  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
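
Each kubeconfig above is grepped for the expected control-plane URL and deleted when the grep fails, so the kubeconfig phase that follows regenerates it from scratch. The pass, sketched in Go (illustrative; plain os.Remove stands in for the logged `sudo rm -f`):

```go
// Sketch of the grep-then-remove pass (illustrative; plain os.Remove stands
// in for the logged `sudo rm -f`).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint is absent from the file.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Println("endpoint missing, removing", f)
			if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
				panic(err)
			}
		}
	}
}
```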
	I1212 00:36:03.563050  530956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:36:03.570932  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:03.615951  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.017170  530956 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.401194576s)
	I1212 00:36:05.017241  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.218047  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.283161  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
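
The restart path re-runs individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config rather than a full `kubeadm init`, with PATH prefixed so minikube's pinned kubeadm binary is used. A condensed sketch of that sequence (error handling simplified; not minikube's code):

```go
// Sketch of the phase sequence in the log: individual `kubeadm init phase`
// steps against the regenerated config, with PATH pinned to minikube's
// bundled binaries. Simplified; not minikube's actual code.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Env = append(os.Environ(),
			"PATH=/var/lib/minikube/binaries/v1.35.0-beta.0:"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("phase %v failed: %v\n%s", p, err, out))
		}
	}
}
```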
	I1212 00:36:05.326722  530956 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:36:05.326794  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:05.827661  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:06.327088  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:06.826877  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:07.327696  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:07.827369  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:08.326870  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:08.827729  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:09.327318  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:09.826994  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:10.326971  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:10.827897  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:11.327939  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:11.827793  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:12.327004  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:12.826881  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:13.327847  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:13.827583  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:14.326945  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:14.827753  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:15.327041  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:15.826900  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:16.327889  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:16.827790  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:17.327551  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:17.826998  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:18.326988  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:18.827579  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:19.326992  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:19.827594  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:20.326867  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:20.827772  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:21.327317  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:21.827925  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:22.327001  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:22.826975  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:23.326960  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:23.826929  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:24.327674  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:24.827495  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:25.326930  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:25.827519  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:26.327962  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:26.827658  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:27.327532  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:27.826969  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:28.327685  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:28.827358  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:29.327926  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:29.826958  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:30.327782  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:30.827884  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:31.327105  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:31.826992  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:32.327681  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:32.827190  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:33.327886  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:33.827647  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:34.327016  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:34.827023  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:35.326882  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:35.827613  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:36.327026  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:36.827579  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:37.327817  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:37.827703  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:38.326889  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:38.827613  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:39.326979  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:39.827741  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:40.327124  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:40.827016  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:41.327889  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:41.827782  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:42.327587  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:42.827812  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:43.327751  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:43.826992  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:44.326981  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:44.826901  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:45.327712  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:45.826871  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:46.327774  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:46.827801  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:47.326976  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:47.827799  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:48.326988  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:48.827011  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:49.326914  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:49.826960  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:50.326958  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:50.827662  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:51.326988  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:51.826946  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:52.327667  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:52.827358  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:53.327906  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:53.827656  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:54.327035  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:54.827870  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:55.327685  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:55.827299  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:56.327742  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:56.827766  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:57.327012  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:57.827860  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:58.326990  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:58.827280  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:59.327889  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:59.826878  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:00.327933  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:00.826966  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:01.327339  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:01.827905  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:02.327586  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:02.827346  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:03.326967  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:03.827912  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:04.327657  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:04.827730  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
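
The block above is minikube polling roughly every 500ms for a kube-apiserver process via pgrep; in this run no process ever appears within the window (00:36:05 to 00:37:05), which is what pushes the run into log gathering below. The loop's shape, sketched in Go (the 60s timeout matches the observed window but is an illustrative guess, not a value taken from minikube's source):

```go
// Sketch of the wait loop visible above: poll ~every 500ms for a
// kube-apiserver process via pgrep until a deadline. The timeout value is
// an illustrative guess.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -f: match the full command line; -x: exact match; -n: newest.
		// pgrep exits non-zero when nothing matches.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(60 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```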
	I1212 00:37:05.326940  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:05.327024  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:05.355488  530956 cri.go:89] found id: ""
	I1212 00:37:05.355502  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.355509  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:05.355514  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:05.355580  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:05.379984  530956 cri.go:89] found id: ""
	I1212 00:37:05.379998  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.380005  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:05.380010  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:05.380068  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:05.404986  530956 cri.go:89] found id: ""
	I1212 00:37:05.405001  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.405010  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:05.405015  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:05.405072  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:05.429349  530956 cri.go:89] found id: ""
	I1212 00:37:05.429363  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.429370  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:05.429375  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:05.429438  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:05.453950  530956 cri.go:89] found id: ""
	I1212 00:37:05.453963  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.453970  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:05.453975  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:05.454030  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:05.481105  530956 cri.go:89] found id: ""
	I1212 00:37:05.481118  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.481126  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:05.481131  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:05.481188  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:05.506041  530956 cri.go:89] found id: ""
	I1212 00:37:05.506054  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.506062  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:05.506069  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:05.506079  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:05.575208  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:05.575226  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:05.602842  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:05.602858  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:05.674408  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:05.674425  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:05.688466  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:05.688482  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:05.756639  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:05.748526   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.749193   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.750701   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.751299   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.752883   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:05.748526   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.749193   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.750701   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.751299   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.752883   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
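
Every probe in the diagnostic round above runs `crictl ps -a --quiet --name=<component>`, which prints one container ID per line; empty output therefore means the control-plane container was never created, not merely stopped, and the describe-nodes attempt fails for the same reason (nothing is listening on 8441). The probe, sketched in Go (a hypothetical helper, not minikube's cri package):

```go
// Sketch of the per-component probe: empty `crictl ps -a --quiet --name=X`
// output means no container for X was ever created. Hypothetical helper,
// not minikube's cri package.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "query failed:", err)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", c, len(ids))
	}
}
```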
	I1212 00:37:08.256849  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:08.268489  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:08.268547  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:08.294558  530956 cri.go:89] found id: ""
	I1212 00:37:08.294571  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.294578  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:08.294583  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:08.294647  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:08.324264  530956 cri.go:89] found id: ""
	I1212 00:37:08.324277  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.324284  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:08.324289  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:08.324345  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:08.349672  530956 cri.go:89] found id: ""
	I1212 00:37:08.349685  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.349692  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:08.349697  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:08.349755  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:08.375495  530956 cri.go:89] found id: ""
	I1212 00:37:08.375509  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.375516  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:08.375521  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:08.375579  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:08.405282  530956 cri.go:89] found id: ""
	I1212 00:37:08.405305  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.405312  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:08.405317  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:08.405384  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:08.431165  530956 cri.go:89] found id: ""
	I1212 00:37:08.431178  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.431185  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:08.431190  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:08.431255  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:08.456458  530956 cri.go:89] found id: ""
	I1212 00:37:08.456472  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.456479  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:08.456487  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:08.456498  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:08.470633  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:08.470647  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:08.537226  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:08.528672   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.529056   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.530703   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.531301   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.532944   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:08.528672   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.529056   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.530703   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.531301   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.532944   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:08.537245  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:08.537256  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:08.606512  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:08.606534  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:08.634126  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:08.634142  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:11.201712  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:11.211510  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:11.211571  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:11.249104  530956 cri.go:89] found id: ""
	I1212 00:37:11.249118  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.249135  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:11.249141  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:11.249214  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:11.285113  530956 cri.go:89] found id: ""
	I1212 00:37:11.285132  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.285143  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:11.285148  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:11.285218  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:11.315788  530956 cri.go:89] found id: ""
	I1212 00:37:11.315802  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.315809  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:11.315814  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:11.315875  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:11.346544  530956 cri.go:89] found id: ""
	I1212 00:37:11.346558  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.346565  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:11.346571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:11.346629  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:11.376168  530956 cri.go:89] found id: ""
	I1212 00:37:11.376192  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.376199  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:11.376205  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:11.376274  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:11.401416  530956 cri.go:89] found id: ""
	I1212 00:37:11.401430  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.401437  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:11.401442  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:11.401501  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:11.426005  530956 cri.go:89] found id: ""
	I1212 00:37:11.426019  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.426026  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:11.426034  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:11.426044  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:11.440817  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:11.440832  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:11.505805  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:11.496652   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.497359   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.499136   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.499679   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.501366   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:11.496652   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.497359   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.499136   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.499679   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.501366   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:11.505819  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:11.505832  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:11.581171  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:11.581192  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:11.614667  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:11.614699  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:14.182453  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:14.192683  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:14.192743  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:14.224011  530956 cri.go:89] found id: ""
	I1212 00:37:14.224025  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.224032  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:14.224037  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:14.224097  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:14.253937  530956 cri.go:89] found id: ""
	I1212 00:37:14.253951  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.253958  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:14.253963  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:14.254034  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:14.291025  530956 cri.go:89] found id: ""
	I1212 00:37:14.291039  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.291047  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:14.291057  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:14.291117  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:14.318045  530956 cri.go:89] found id: ""
	I1212 00:37:14.318059  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.318066  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:14.318072  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:14.318133  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:14.345053  530956 cri.go:89] found id: ""
	I1212 00:37:14.345074  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.345082  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:14.345087  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:14.345151  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:14.370315  530956 cri.go:89] found id: ""
	I1212 00:37:14.370328  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.370335  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:14.370340  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:14.370397  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:14.400128  530956 cri.go:89] found id: ""
	I1212 00:37:14.400142  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.400149  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:14.400156  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:14.400166  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:14.469510  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:14.469528  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:14.497946  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:14.497962  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:14.567259  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:14.567276  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:14.581753  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:14.581768  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:14.649334  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:14.641152   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.642071   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.643581   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.644076   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.645435   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:14.641152   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.642071   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.643581   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.644076   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.645435   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
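Every kubectl call in this stretch dies with "connect: connection refused" against [::1]:8441, i.e. nothing is listening on the apiserver port yet. A quick manual probe confirms that independently of kubectl; this is a minimal sketch, assuming `minikube ssh -- <cmd>` is available on the host and with <profile> standing in for the profile under test:

	# Is anything listening on the apiserver port (8441 for this profile)?
	minikube -p <profile> ssh -- "sudo ss -ltn | grep 8441 || echo 'port 8441 not listening'"
	# Probe the health endpoint; expect an immediate refusal while the apiserver is down.
	minikube -p <profile> ssh -- "curl -ksm 2 https://localhost:8441/healthz"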
	I1212 00:37:17.151022  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:17.161375  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:17.161433  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:17.187128  530956 cri.go:89] found id: ""
	I1212 00:37:17.187144  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.187151  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:17.187157  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:17.187224  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:17.212545  530956 cri.go:89] found id: ""
	I1212 00:37:17.212560  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.212567  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:17.212573  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:17.212632  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:17.239817  530956 cri.go:89] found id: ""
	I1212 00:37:17.239831  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.239838  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:17.239843  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:17.239900  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:17.267133  530956 cri.go:89] found id: ""
	I1212 00:37:17.267147  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.267155  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:17.267160  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:17.267232  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:17.304534  530956 cri.go:89] found id: ""
	I1212 00:37:17.304548  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.304554  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:17.304559  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:17.304618  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:17.330052  530956 cri.go:89] found id: ""
	I1212 00:37:17.330066  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.330073  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:17.330078  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:17.330133  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:17.354652  530956 cri.go:89] found id: ""
	I1212 00:37:17.354671  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.354678  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:17.354705  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:17.354715  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:17.421755  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:17.412804   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.413382   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.415079   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.415827   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.417552   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:17.412804   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.413382   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.415079   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.415827   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.417552   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:17.421766  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:17.421779  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:17.496810  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:17.496835  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:17.525867  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:17.525886  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:17.594454  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:17.594475  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
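Each gathering pass collects the same four sources: the kubelet and CRI-O journals, a filtered dmesg, and the container list. For offline triage the same bundle can be pulled by hand with the exact commands from the log; a sketch, with <profile> again hypothetical:

	minikube -p <profile> ssh -- "sudo journalctl -u kubelet -n 400"    # kubelet unit log
	minikube -p <profile> ssh -- "sudo journalctl -u crio -n 400"       # CRI-O unit log
	minikube -p <profile> ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	minikube -p <profile> ssh -- "sudo crictl ps -a"                    # container status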
	I1212 00:37:20.109774  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:20.119858  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:20.119916  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:20.148052  530956 cri.go:89] found id: ""
	I1212 00:37:20.148066  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.148073  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:20.148078  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:20.148138  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:20.172308  530956 cri.go:89] found id: ""
	I1212 00:37:20.172322  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.172329  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:20.172334  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:20.172392  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:20.200721  530956 cri.go:89] found id: ""
	I1212 00:37:20.200735  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.200743  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:20.200748  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:20.200807  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:20.232123  530956 cri.go:89] found id: ""
	I1212 00:37:20.232136  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.232143  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:20.232148  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:20.232207  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:20.263625  530956 cri.go:89] found id: ""
	I1212 00:37:20.263638  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.263646  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:20.263651  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:20.263710  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:20.292234  530956 cri.go:89] found id: ""
	I1212 00:37:20.292248  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.292255  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:20.292260  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:20.292319  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:20.316784  530956 cri.go:89] found id: ""
	I1212 00:37:20.316798  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.316804  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:20.316812  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:20.316822  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:20.382530  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:20.382550  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:20.397572  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:20.397587  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:20.462516  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:20.453480   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.454137   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.455857   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.456349   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.458004   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:20.453480   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.454137   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.455857   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.456349   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.458004   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:20.462526  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:20.462536  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:20.536302  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:20.536323  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
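The per-component checks repeated in each cycle reduce to one crictl invocation per name: --quiet prints only container IDs, so an empty result means the container was never created, not merely that it exited. A shell equivalent of the check, with kube-apiserver as the example component:

	ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	# Mirrors logs.go:284: an empty ID list yields the warning seen in the log.
	[ -z "$ids" ] && echo 'No container was found matching "kube-apiserver"'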
	I1212 00:37:23.067516  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:23.077747  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:23.077816  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:23.102753  530956 cri.go:89] found id: ""
	I1212 00:37:23.102767  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.102774  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:23.102780  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:23.102845  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:23.128706  530956 cri.go:89] found id: ""
	I1212 00:37:23.128719  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.128727  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:23.128732  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:23.128792  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:23.154481  530956 cri.go:89] found id: ""
	I1212 00:37:23.154495  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.154502  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:23.154507  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:23.154572  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:23.179609  530956 cri.go:89] found id: ""
	I1212 00:37:23.179622  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.179630  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:23.179635  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:23.179699  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:23.205151  530956 cri.go:89] found id: ""
	I1212 00:37:23.205165  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.205172  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:23.205177  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:23.205238  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:23.242297  530956 cri.go:89] found id: ""
	I1212 00:37:23.242312  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.242319  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:23.242324  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:23.242393  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:23.271432  530956 cri.go:89] found id: ""
	I1212 00:37:23.271446  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.271453  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:23.271461  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:23.271472  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:23.339885  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:23.339904  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:23.355098  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:23.355115  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:23.419229  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:23.410980   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.411565   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.413072   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.413369   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.415026   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:23.410980   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.411565   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.413072   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.413369   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.415026   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:23.419240  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:23.419250  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:23.486458  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:23.486478  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
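The `sudo pgrep -xnf kube-apiserver.*minikube.*` call that opens each cycle is a pure process check: -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest match, so a non-zero exit means no apiserver process exists at all. A minimal wait loop built on the same test (the 40-attempt cap is an assumption; the log shows roughly a 3 s cadence between cycles):

	for i in $(seq 1 40); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && { echo "kube-apiserver is up"; break; }
	  sleep 3
	done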
	I1212 00:37:26.021866  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:26.032710  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:26.032772  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:26.058774  530956 cri.go:89] found id: ""
	I1212 00:37:26.058811  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.058818  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:26.058824  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:26.058887  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:26.084731  530956 cri.go:89] found id: ""
	I1212 00:37:26.084746  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.084753  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:26.084758  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:26.084821  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:26.110515  530956 cri.go:89] found id: ""
	I1212 00:37:26.110529  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.110536  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:26.110541  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:26.110598  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:26.137082  530956 cri.go:89] found id: ""
	I1212 00:37:26.137095  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.137103  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:26.137112  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:26.137172  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:26.162724  530956 cri.go:89] found id: ""
	I1212 00:37:26.162738  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.162745  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:26.162751  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:26.162818  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:26.188538  530956 cri.go:89] found id: ""
	I1212 00:37:26.188559  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.188566  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:26.188571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:26.188630  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:26.219848  530956 cri.go:89] found id: ""
	I1212 00:37:26.219862  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.219869  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:26.219876  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:26.219887  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:26.291444  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:26.291463  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:26.306938  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:26.306954  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:26.368571  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:26.360215   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.360983   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.362459   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.362990   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.364656   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:26.360215   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.360983   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.362459   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.362990   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.364656   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:26.368581  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:26.368593  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:26.436229  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:26.436247  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
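The container-status line uses a two-stage fallback: resolve crictl to an absolute path (tolerating a missing `which` entry by falling back to the bare name), and only if that invocation fails entirely, try docker instead. Spelled out with comments:

	# Prefer crictl by absolute path, fall back to the bare name when
	# `which` finds nothing, and to docker only when crictl itself errors out.
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a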
	I1212 00:37:28.966999  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:28.976928  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:28.976991  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:29.003108  530956 cri.go:89] found id: ""
	I1212 00:37:29.003123  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.003130  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:29.003136  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:29.003212  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:29.028803  530956 cri.go:89] found id: ""
	I1212 00:37:29.028817  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.028824  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:29.028828  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:29.028885  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:29.056738  530956 cri.go:89] found id: ""
	I1212 00:37:29.056758  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.056765  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:29.056770  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:29.056828  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:29.081270  530956 cri.go:89] found id: ""
	I1212 00:37:29.081284  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.081291  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:29.081297  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:29.081354  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:29.106545  530956 cri.go:89] found id: ""
	I1212 00:37:29.106559  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.106566  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:29.106571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:29.106629  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:29.133248  530956 cri.go:89] found id: ""
	I1212 00:37:29.133262  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.133270  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:29.133275  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:29.133335  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:29.162606  530956 cri.go:89] found id: ""
	I1212 00:37:29.162620  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.162627  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:29.162634  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:29.162645  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:29.228360  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:29.228380  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:29.244576  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:29.244593  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:29.318498  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:29.310629   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.311117   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.312690   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.313032   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.314613   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:29.310629   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.311117   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.312690   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.313032   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.314613   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:29.318508  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:29.318519  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:29.386989  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:29.387009  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
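Note that the describe-nodes step bypasses the host kubectl entirely: it runs the version-matched binary staged under /var/lib/minikube/binaries/ against the in-VM kubeconfig. A sketch of reproducing it by hand, plus a check of which endpoint that kubeconfig targets (the get-nodes variant and <profile> are illustrative):

	minikube -p <profile> ssh -- "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	# Per the refusals above, the server line should name https://localhost:8441.
	minikube -p <profile> ssh -- "sudo grep 'server:' /var/lib/minikube/kubeconfig"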
	I1212 00:37:31.922335  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:31.932487  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:31.932555  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:31.958330  530956 cri.go:89] found id: ""
	I1212 00:37:31.958344  530956 logs.go:282] 0 containers: []
	W1212 00:37:31.958351  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:31.958356  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:31.958413  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:31.986166  530956 cri.go:89] found id: ""
	I1212 00:37:31.986184  530956 logs.go:282] 0 containers: []
	W1212 00:37:31.986193  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:31.986198  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:31.986263  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:32.018215  530956 cri.go:89] found id: ""
	I1212 00:37:32.018229  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.018236  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:32.018241  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:32.018309  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:32.045496  530956 cri.go:89] found id: ""
	I1212 00:37:32.045510  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.045526  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:32.045531  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:32.045599  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:32.071713  530956 cri.go:89] found id: ""
	I1212 00:37:32.071727  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.071733  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:32.071748  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:32.071809  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:32.096398  530956 cri.go:89] found id: ""
	I1212 00:37:32.096412  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.096419  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:32.096424  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:32.096481  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:32.121995  530956 cri.go:89] found id: ""
	I1212 00:37:32.122009  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.122016  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:32.122024  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:32.122033  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:32.187537  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:32.187556  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:32.202073  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:32.202088  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:32.283678  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:32.269254   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.269661   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.271076   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.271701   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.275343   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:32.269254   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.269661   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.271076   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.271701   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.275343   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:32.283688  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:32.283699  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:32.352426  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:32.352446  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
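The dmesg invocation is tuned for unattended capture: -H enables the human-readable format (which would normally page, hence -P to disable the pager), -L=never strips color codes, --level warn,err,crit,alert,emerg drops info/debug noise, and tail -n 400 caps the output. The same filter works interactively on the node:

	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400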
	I1212 00:37:34.887315  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:34.897374  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:34.897440  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:34.922626  530956 cri.go:89] found id: ""
	I1212 00:37:34.922641  530956 logs.go:282] 0 containers: []
	W1212 00:37:34.922648  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:34.922654  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:34.922741  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:34.948176  530956 cri.go:89] found id: ""
	I1212 00:37:34.948190  530956 logs.go:282] 0 containers: []
	W1212 00:37:34.948199  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:34.948204  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:34.948302  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:34.975855  530956 cri.go:89] found id: ""
	I1212 00:37:34.975869  530956 logs.go:282] 0 containers: []
	W1212 00:37:34.975883  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:34.975889  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:34.975954  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:35.008030  530956 cri.go:89] found id: ""
	I1212 00:37:35.008046  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.008054  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:35.008060  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:35.008144  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:35.033803  530956 cri.go:89] found id: ""
	I1212 00:37:35.033816  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.033823  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:35.033828  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:35.033887  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:35.059521  530956 cri.go:89] found id: ""
	I1212 00:37:35.059535  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.059542  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:35.059547  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:35.059604  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:35.084378  530956 cri.go:89] found id: ""
	I1212 00:37:35.084392  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.084399  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:35.084406  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:35.084416  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:35.150144  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:35.150166  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:35.164295  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:35.164311  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:35.237720  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:35.229555   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.230202   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.231874   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.232277   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.233798   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:35.229555   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.230202   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.231874   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.232277   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.233798   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:35.237730  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:35.237740  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:35.309700  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:35.309721  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
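With every control-plane container absent but the poller still looping, one plausible next step (an assumption, not something this log performs) is to check whether kubelet is running and whether the static-pod manifests it should launch are present on the kubeadm-style node:

	minikube -p <profile> ssh -- "sudo systemctl is-active kubelet"
	minikube -p <profile> ssh -- "sudo ls /etc/kubernetes/manifests"    # kube-apiserver.yaml etc. should be here
	minikube -p <profile> ssh -- "sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail' | tail -n 20"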
	I1212 00:37:37.842191  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:37.852127  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:37.852198  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:37.883852  530956 cri.go:89] found id: ""
	I1212 00:37:37.883866  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.883873  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:37.883879  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:37.883940  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:37.908974  530956 cri.go:89] found id: ""
	I1212 00:37:37.908988  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.908995  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:37.909000  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:37.909058  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:37.934558  530956 cri.go:89] found id: ""
	I1212 00:37:37.934581  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.934588  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:37.934593  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:37.934659  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:37.960620  530956 cri.go:89] found id: ""
	I1212 00:37:37.960634  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.960641  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:37.960653  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:37.960716  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:37.985545  530956 cri.go:89] found id: ""
	I1212 00:37:37.985559  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.985566  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:37.985571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:37.985649  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:38.019481  530956 cri.go:89] found id: ""
	I1212 00:37:38.019496  530956 logs.go:282] 0 containers: []
	W1212 00:37:38.019511  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:38.019517  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:38.019587  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:38.050591  530956 cri.go:89] found id: ""
	I1212 00:37:38.050606  530956 logs.go:282] 0 containers: []
	W1212 00:37:38.050613  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:38.050621  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:38.050631  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:38.118052  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:38.118073  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:38.133136  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:38.133152  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:38.195824  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:38.187464   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.188136   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.189863   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.190376   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.191908   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:38.187464   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.188136   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.189863   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.190376   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.191908   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:38.195836  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:38.195847  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:38.277789  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:38.277816  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:40.807649  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:40.817759  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:40.817820  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:40.843061  530956 cri.go:89] found id: ""
	I1212 00:37:40.843075  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.843082  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:40.843087  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:40.843147  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:40.867922  530956 cri.go:89] found id: ""
	I1212 00:37:40.867936  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.867944  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:40.867949  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:40.868005  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:40.892630  530956 cri.go:89] found id: ""
	I1212 00:37:40.892644  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.892653  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:40.892657  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:40.892716  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:40.918166  530956 cri.go:89] found id: ""
	I1212 00:37:40.918180  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.918187  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:40.918192  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:40.918250  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:40.944075  530956 cri.go:89] found id: ""
	I1212 00:37:40.944088  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.944095  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:40.944100  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:40.944160  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:40.969320  530956 cri.go:89] found id: ""
	I1212 00:37:40.969333  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.969340  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:40.969346  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:40.969405  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:40.997473  530956 cri.go:89] found id: ""
	I1212 00:37:40.997487  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.997494  530956 logs.go:284] No container was found matching "kindnet"
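The scan above is the same seven crictl queries issued once per expected control-plane component; every one returns empty, so no containers were ever created. Reduced to a shell loop (a sketch using only the command already shown in the log), it is:

	# Query the CRI for every expected control-plane container; empty output means none exist.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"
	done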
	I1212 00:37:40.997501  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:40.997512  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:41.028728  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:41.028743  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:41.095087  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:41.095107  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:41.109485  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:41.109501  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:41.176844  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:41.166571   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.167470   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.170826   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.171336   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.172874   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:41.176853  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:41.176864  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:43.749966  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:43.760058  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:43.760118  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:43.785533  530956 cri.go:89] found id: ""
	I1212 00:37:43.785546  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.785554  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:43.785559  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:43.785616  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:43.812938  530956 cri.go:89] found id: ""
	I1212 00:37:43.812952  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.812960  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:43.812964  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:43.813029  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:43.838583  530956 cri.go:89] found id: ""
	I1212 00:37:43.838596  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.838604  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:43.838609  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:43.838669  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:43.864548  530956 cri.go:89] found id: ""
	I1212 00:37:43.864562  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.864569  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:43.864574  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:43.864633  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:43.889391  530956 cri.go:89] found id: ""
	I1212 00:37:43.889405  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.889412  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:43.889417  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:43.889478  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:43.914183  530956 cri.go:89] found id: ""
	I1212 00:37:43.914196  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.914203  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:43.914209  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:43.914268  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:43.941097  530956 cri.go:89] found id: ""
	I1212 00:37:43.941112  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.941119  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:43.941126  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:43.941136  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:44.007607  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:44.007625  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:44.022976  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:44.022993  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:44.087167  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:44.078521   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.079166   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.080973   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.081418   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.083213   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:44.087177  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:44.087190  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:44.156045  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:44.156065  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:46.684537  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:46.694320  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:46.694383  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:46.718727  530956 cri.go:89] found id: ""
	I1212 00:37:46.718741  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.718751  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:46.718756  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:46.718832  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:46.744753  530956 cri.go:89] found id: ""
	I1212 00:37:46.744767  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.744774  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:46.744779  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:46.744838  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:46.773525  530956 cri.go:89] found id: ""
	I1212 00:37:46.773538  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.773546  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:46.773551  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:46.773608  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:46.798518  530956 cri.go:89] found id: ""
	I1212 00:37:46.798532  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.798539  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:46.798544  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:46.798602  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:46.822867  530956 cri.go:89] found id: ""
	I1212 00:37:46.822880  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.822887  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:46.822893  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:46.822949  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:46.849825  530956 cri.go:89] found id: ""
	I1212 00:37:46.849839  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.849846  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:46.849851  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:46.849909  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:46.874986  530956 cri.go:89] found id: ""
	I1212 00:37:46.874999  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.875011  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:46.875019  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:46.875030  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:46.939887  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:46.931753   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.932307   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.933826   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.934346   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.936019   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:46.939896  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:46.939909  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:47.008024  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:47.008044  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:47.036373  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:47.036388  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:47.101329  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:47.101347  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
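Each retry gathers the same four log sources before probing the apiserver again. The commands below are lifted verbatim from the cycle above and can be run by hand on the node to inspect why the control-plane containers never came up:

	sudo journalctl -u kubelet -n 400                                           # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
	sudo journalctl -u crio -n 400                                              # CRI-O runtime logs
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a               # container status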
	I1212 00:37:49.616038  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:49.626178  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:49.626240  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:49.652682  530956 cri.go:89] found id: ""
	I1212 00:37:49.652696  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.652703  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:49.652708  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:49.652766  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:49.679170  530956 cri.go:89] found id: ""
	I1212 00:37:49.679185  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.679191  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:49.679197  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:49.679256  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:49.706504  530956 cri.go:89] found id: ""
	I1212 00:37:49.706518  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.706526  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:49.706532  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:49.706592  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:49.732201  530956 cri.go:89] found id: ""
	I1212 00:37:49.732215  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.732222  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:49.732227  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:49.732287  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:49.757094  530956 cri.go:89] found id: ""
	I1212 00:37:49.757107  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.757115  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:49.757119  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:49.757178  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:49.785367  530956 cri.go:89] found id: ""
	I1212 00:37:49.785382  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.785391  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:49.785396  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:49.785466  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:49.809132  530956 cri.go:89] found id: ""
	I1212 00:37:49.809145  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.809152  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:49.809160  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:49.809171  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:49.874272  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:49.874291  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:49.888851  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:49.888866  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:49.954139  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:49.945852   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.946386   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.948180   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.948551   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.950144   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:49.954152  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:49.954164  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:50.021343  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:50.021364  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:52.550858  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:52.560788  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:52.560857  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:52.589542  530956 cri.go:89] found id: ""
	I1212 00:37:52.589556  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.589563  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:52.589568  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:52.589629  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:52.613111  530956 cri.go:89] found id: ""
	I1212 00:37:52.613124  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.613131  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:52.613136  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:52.613195  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:52.637059  530956 cri.go:89] found id: ""
	I1212 00:37:52.637072  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.637079  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:52.637084  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:52.637142  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:52.661402  530956 cri.go:89] found id: ""
	I1212 00:37:52.661415  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.661422  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:52.661428  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:52.661485  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:52.686208  530956 cri.go:89] found id: ""
	I1212 00:37:52.686221  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.686228  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:52.686234  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:52.686292  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:52.714239  530956 cri.go:89] found id: ""
	I1212 00:37:52.714257  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.714272  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:52.714281  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:52.714360  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:52.738849  530956 cri.go:89] found id: ""
	I1212 00:37:52.738862  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.738871  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:52.738878  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:52.738889  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:52.805309  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:52.796653   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.797158   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.798767   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.799405   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.800977   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:52.805318  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:52.805329  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:52.873118  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:52.873138  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:52.901072  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:52.901088  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:52.967085  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:52.967104  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:55.482800  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:55.493703  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:55.493761  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:55.527575  530956 cri.go:89] found id: ""
	I1212 00:37:55.527588  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.527595  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:55.527601  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:55.527663  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:55.552177  530956 cri.go:89] found id: ""
	I1212 00:37:55.552191  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.552198  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:55.552203  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:55.552264  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:55.576968  530956 cri.go:89] found id: ""
	I1212 00:37:55.576981  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.576988  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:55.576993  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:55.577054  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:55.603212  530956 cri.go:89] found id: ""
	I1212 00:37:55.603225  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.603232  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:55.603237  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:55.603300  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:55.629922  530956 cri.go:89] found id: ""
	I1212 00:37:55.629936  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.629943  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:55.629949  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:55.630009  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:55.659450  530956 cri.go:89] found id: ""
	I1212 00:37:55.659469  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.659476  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:55.659482  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:55.659540  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:55.683953  530956 cri.go:89] found id: ""
	I1212 00:37:55.683967  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.683974  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:55.683981  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:55.683991  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:55.752000  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:55.752019  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:55.781847  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:55.781863  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:55.846599  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:55.846617  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:55.861470  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:55.861487  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:55.927422  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:55.918837   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.919637   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.921344   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.921624   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.923175   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:58.429107  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:58.438890  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:58.438951  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:58.463332  530956 cri.go:89] found id: ""
	I1212 00:37:58.463346  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.463353  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:58.463358  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:58.463420  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:58.502844  530956 cri.go:89] found id: ""
	I1212 00:37:58.502859  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.502866  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:58.502871  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:58.502934  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:58.535191  530956 cri.go:89] found id: ""
	I1212 00:37:58.535204  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.535211  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:58.535216  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:58.535275  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:58.560276  530956 cri.go:89] found id: ""
	I1212 00:37:58.560290  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.560296  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:58.560302  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:58.560360  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:58.585008  530956 cri.go:89] found id: ""
	I1212 00:37:58.585022  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.585029  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:58.585034  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:58.585092  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:58.610668  530956 cri.go:89] found id: ""
	I1212 00:37:58.610704  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.610712  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:58.610717  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:58.610791  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:58.633946  530956 cri.go:89] found id: ""
	I1212 00:37:58.633960  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.633967  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:58.633974  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:58.633984  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:58.702859  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:58.702878  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:58.730459  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:58.730475  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:58.799001  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:58.799020  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:58.813707  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:58.813724  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:58.880292  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:58.871863   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.872482   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.874082   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.874638   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.876520   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:01.380529  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:01.390377  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:01.390440  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:01.414742  530956 cri.go:89] found id: ""
	I1212 00:38:01.414755  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.414763  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:01.414769  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:01.414848  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:01.440014  530956 cri.go:89] found id: ""
	I1212 00:38:01.440028  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.440035  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:01.440040  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:01.440100  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:01.469919  530956 cri.go:89] found id: ""
	I1212 00:38:01.469947  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.469955  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:01.469963  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:01.470025  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:01.502102  530956 cri.go:89] found id: ""
	I1212 00:38:01.502116  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.502123  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:01.502128  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:01.502185  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:01.550477  530956 cri.go:89] found id: ""
	I1212 00:38:01.550497  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.550504  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:01.550509  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:01.550572  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:01.575848  530956 cri.go:89] found id: ""
	I1212 00:38:01.575861  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.575868  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:01.575874  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:01.575933  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:01.601329  530956 cri.go:89] found id: ""
	I1212 00:38:01.601342  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.601350  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:01.601358  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:01.601369  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:01.617336  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:01.617351  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:01.681650  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:01.672976   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.673795   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.675461   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.675793   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.677308   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:01.681659  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:01.681669  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:01.753959  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:01.753987  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:01.784884  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:01.784901  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:04.352224  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:04.362582  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:04.362651  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:04.387422  530956 cri.go:89] found id: ""
	I1212 00:38:04.387436  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.387443  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:04.387448  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:04.387515  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:04.416278  530956 cri.go:89] found id: ""
	I1212 00:38:04.416292  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.416298  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:04.416304  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:04.416360  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:04.445370  530956 cri.go:89] found id: ""
	I1212 00:38:04.445384  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.445391  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:04.445397  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:04.445455  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:04.482755  530956 cri.go:89] found id: ""
	I1212 00:38:04.482768  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.482783  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:04.482789  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:04.482857  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:04.509091  530956 cri.go:89] found id: ""
	I1212 00:38:04.509105  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.509120  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:04.509126  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:04.509194  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:04.539958  530956 cri.go:89] found id: ""
	I1212 00:38:04.539980  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.539987  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:04.539995  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:04.540053  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:04.565072  530956 cri.go:89] found id: ""
	I1212 00:38:04.565085  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.565092  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:04.565100  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:04.565110  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:04.632823  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:04.632844  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:04.659747  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:04.659763  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:04.726963  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:04.726980  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:04.742446  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:04.742462  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:04.811712  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:04.802381   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.803275   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.805003   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.805618   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.807784   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:04.802381   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.803275   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.805003   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.805618   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.807784   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
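The block above is one full iteration of minikube's wait-for-apiserver loop: roughly every three seconds it runs "sudo pgrep -xnf kube-apiserver.*minikube.*", asks CRI-O via crictl whether any control-plane container exists (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, re-gathers kubelet/dmesg/CRI-O logs and retries "kubectl describe nodes", which keeps failing while nothing listens on localhost:8441. Below is a minimal Go sketch of that loop, assuming it runs on the node itself (ssh_runner.go executes the same commands over SSH) and that crictl is on PATH; it is a hypothetical reconstruction from this trace, not minikube's actual implementation.

// apiserver_wait_sketch.go -- hypothetical standalone reconstruction of the
// polling loop visible in this trace.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// componentPresent mirrors "sudo crictl ps -a --quiet --name=<name>":
// empty output means no container (running or exited) matches the name.
func componentPresent(name string) bool {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for {
		// Exit status 0 from pgrep means an apiserver process for this
		// profile exists, so the wait is over.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		for _, c := range components {
			if !componentPresent(c) {
				fmt.Printf("no container found matching %q\n", c)
			}
		}
		time.Sleep(3 * time.Second) // the timestamps above show ~3s between attempts
	}
}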
	I1212 00:38:07.313373  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:07.323395  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:07.323461  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:07.349092  530956 cri.go:89] found id: ""
	I1212 00:38:07.349106  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.349114  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:07.349119  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:07.349178  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:07.374733  530956 cri.go:89] found id: ""
	I1212 00:38:07.374747  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.374754  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:07.374759  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:07.374826  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:07.399425  530956 cri.go:89] found id: ""
	I1212 00:38:07.399439  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.399446  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:07.399450  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:07.399509  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:07.423784  530956 cri.go:89] found id: ""
	I1212 00:38:07.423798  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.423805  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:07.423809  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:07.423866  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:07.449601  530956 cri.go:89] found id: ""
	I1212 00:38:07.449615  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.449622  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:07.449627  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:07.449687  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:07.483778  530956 cri.go:89] found id: ""
	I1212 00:38:07.483793  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.483800  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:07.483805  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:07.483863  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:07.514105  530956 cri.go:89] found id: ""
	I1212 00:38:07.514118  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.514126  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:07.514135  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:07.514144  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:07.584461  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:07.584483  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:07.599076  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:07.599092  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:07.662502  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:07.654255   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.655086   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.656662   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.656958   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.658502   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:07.654255   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.655086   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.656662   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.656958   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.658502   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:07.662512  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:07.662524  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:07.730514  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:07.730532  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:10.261580  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:10.271806  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:10.271866  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:10.301488  530956 cri.go:89] found id: ""
	I1212 00:38:10.301509  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.301517  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:10.301522  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:10.301586  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:10.328569  530956 cri.go:89] found id: ""
	I1212 00:38:10.328582  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.328589  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:10.328594  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:10.328651  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:10.352390  530956 cri.go:89] found id: ""
	I1212 00:38:10.352404  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.352411  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:10.352416  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:10.352476  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:10.376595  530956 cri.go:89] found id: ""
	I1212 00:38:10.376608  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.376615  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:10.376620  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:10.376676  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:10.401114  530956 cri.go:89] found id: ""
	I1212 00:38:10.401129  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.401136  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:10.401141  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:10.401202  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:10.426633  530956 cri.go:89] found id: ""
	I1212 00:38:10.426647  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.426654  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:10.426659  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:10.426740  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:10.452233  530956 cri.go:89] found id: ""
	I1212 00:38:10.452246  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.452254  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:10.452262  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:10.452272  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:10.521036  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:10.521055  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:10.535759  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:10.535774  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:10.601793  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:10.593515   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.594074   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.595582   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.596077   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.597523   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:10.593515   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.594074   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.595582   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.596077   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.597523   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:10.601803  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:10.601813  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:10.672541  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:10.672560  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
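Each failed describe-nodes block carries the same kubectl stderr: client-go retries API group discovery five times (the memcache.go:265 lines) before giving up with "connection refused" on localhost:8441, the apiserver port this profile uses. A quick way to confirm what that stderr implies, namely that nothing is accepting connections on the port, is a plain TCP dial. This standalone check is a sketch; the port number is taken from the log, not from any default.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver port directly; a refused connection reproduces
	// the "dial tcp [::1]:8441: connect: connection refused" seen above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}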
	I1212 00:38:13.203975  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:13.213736  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:13.213796  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:13.238219  530956 cri.go:89] found id: ""
	I1212 00:38:13.238234  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.238241  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:13.238246  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:13.238303  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:13.262428  530956 cri.go:89] found id: ""
	I1212 00:38:13.262441  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.262449  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:13.262454  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:13.262518  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:13.287118  530956 cri.go:89] found id: ""
	I1212 00:38:13.287132  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.287139  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:13.287144  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:13.287201  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:13.316471  530956 cri.go:89] found id: ""
	I1212 00:38:13.316485  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.316492  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:13.316497  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:13.316554  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:13.340630  530956 cri.go:89] found id: ""
	I1212 00:38:13.340644  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.340651  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:13.340656  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:13.340719  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:13.365167  530956 cri.go:89] found id: ""
	I1212 00:38:13.365180  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.365187  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:13.365192  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:13.365249  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:13.393786  530956 cri.go:89] found id: ""
	I1212 00:38:13.393800  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.393806  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:13.393813  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:13.393824  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:13.460497  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:13.460517  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:13.484321  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:13.484350  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:13.564959  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:13.556484   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.557156   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.558914   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.559521   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.561122   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:13.556484   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.557156   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.558914   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.559521   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.561122   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:13.564970  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:13.564991  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:13.633622  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:13.633641  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:16.165859  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:16.179076  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:16.179137  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:16.204832  530956 cri.go:89] found id: ""
	I1212 00:38:16.204846  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.204853  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:16.204858  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:16.204929  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:16.230899  530956 cri.go:89] found id: ""
	I1212 00:38:16.230912  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.230920  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:16.230924  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:16.230985  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:16.260492  530956 cri.go:89] found id: ""
	I1212 00:38:16.260505  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.260513  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:16.260518  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:16.260582  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:16.285639  530956 cri.go:89] found id: ""
	I1212 00:38:16.285652  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.285660  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:16.285665  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:16.285724  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:16.311240  530956 cri.go:89] found id: ""
	I1212 00:38:16.311253  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.311261  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:16.311266  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:16.311331  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:16.337039  530956 cri.go:89] found id: ""
	I1212 00:38:16.337053  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.337060  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:16.337065  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:16.337132  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:16.363033  530956 cri.go:89] found id: ""
	I1212 00:38:16.363047  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.363053  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:16.363061  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:16.363072  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:16.393154  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:16.393171  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:16.460499  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:16.460516  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:16.475666  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:16.475681  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:16.550358  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:16.542066   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.542789   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.544568   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.545193   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.546806   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:16.542066   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.542789   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.544568   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.545193   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.546806   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:16.550367  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:16.550378  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:19.117450  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:19.129437  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:19.129500  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:19.153970  530956 cri.go:89] found id: ""
	I1212 00:38:19.153983  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.153990  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:19.153995  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:19.154052  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:19.179294  530956 cri.go:89] found id: ""
	I1212 00:38:19.179307  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.179314  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:19.179319  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:19.179381  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:19.205071  530956 cri.go:89] found id: ""
	I1212 00:38:19.205091  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.205098  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:19.205103  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:19.205168  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:19.230084  530956 cri.go:89] found id: ""
	I1212 00:38:19.230098  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.230111  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:19.230118  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:19.230181  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:19.255464  530956 cri.go:89] found id: ""
	I1212 00:38:19.255477  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.255485  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:19.255490  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:19.255549  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:19.285389  530956 cri.go:89] found id: ""
	I1212 00:38:19.285402  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.285409  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:19.285415  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:19.285472  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:19.312947  530956 cri.go:89] found id: ""
	I1212 00:38:19.312960  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.312967  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:19.312975  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:19.312985  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:19.350894  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:19.350911  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:19.417923  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:19.417945  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:19.432429  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:19.432445  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:19.505932  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:19.498121   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.498890   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500411   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500702   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.502118   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:19.498121   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.498890   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500411   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500702   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.502118   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:19.505942  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:19.505964  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
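The "Gathering logs for ..." steps shell out to journalctl, dmesg, and crictl with the exact commands shown in the trace (logs.go:123). The following sketch replays the same gather commands locally, assuming bash and sudo are available; wrapping each command in bash -c preserves the || fallback and the backtick substitution used to locate crictl.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied verbatim from the trace; map iteration order is
	// not significant for this sketch.
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		fmt.Println("Gathering logs for", name, "...")
		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("%s\n", out)
	}
}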
	I1212 00:38:22.083196  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:22.093637  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:22.093699  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:22.118550  530956 cri.go:89] found id: ""
	I1212 00:38:22.118565  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.118572  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:22.118578  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:22.118636  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:22.145134  530956 cri.go:89] found id: ""
	I1212 00:38:22.145147  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.145155  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:22.145159  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:22.145217  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:22.170293  530956 cri.go:89] found id: ""
	I1212 00:38:22.170306  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.170313  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:22.170318  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:22.170386  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:22.197536  530956 cri.go:89] found id: ""
	I1212 00:38:22.197550  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.197571  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:22.197576  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:22.197642  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:22.222476  530956 cri.go:89] found id: ""
	I1212 00:38:22.222490  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.222497  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:22.222502  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:22.222560  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:22.247759  530956 cri.go:89] found id: ""
	I1212 00:38:22.247779  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.247792  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:22.247797  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:22.247865  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:22.278000  530956 cri.go:89] found id: ""
	I1212 00:38:22.278022  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.278030  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:22.278037  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:22.278047  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:22.306112  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:22.306127  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:22.377647  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:22.377675  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:22.394490  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:22.394506  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:22.462988  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:22.454404   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.455058   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.456732   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.457164   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.458875   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:22.454404   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.455058   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.456732   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.457164   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.458875   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:22.462999  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:22.463010  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:25.044675  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:25.054532  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:25.054592  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:25.080041  530956 cri.go:89] found id: ""
	I1212 00:38:25.080055  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.080062  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:25.080068  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:25.080129  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:25.105941  530956 cri.go:89] found id: ""
	I1212 00:38:25.105957  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.105965  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:25.105971  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:25.106038  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:25.136063  530956 cri.go:89] found id: ""
	I1212 00:38:25.136078  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.136086  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:25.136096  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:25.136159  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:25.161125  530956 cri.go:89] found id: ""
	I1212 00:38:25.161140  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.161147  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:25.161153  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:25.161212  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:25.187318  530956 cri.go:89] found id: ""
	I1212 00:38:25.187333  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.187340  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:25.187345  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:25.187407  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:25.213505  530956 cri.go:89] found id: ""
	I1212 00:38:25.213519  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.213528  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:25.213533  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:25.213593  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:25.238804  530956 cri.go:89] found id: ""
	I1212 00:38:25.238818  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.238825  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:25.238833  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:25.238845  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:25.253570  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:25.253586  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:25.319774  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:25.310440   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.311167   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.312774   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.313270   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.315248   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:25.310440   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.311167   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.312774   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.313270   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.315248   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:25.319800  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:25.319811  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:25.392356  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:25.392375  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:25.422668  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:25.422706  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:27.990024  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:28.003363  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:28.003444  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:28.033003  530956 cri.go:89] found id: ""
	I1212 00:38:28.033017  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.033024  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:28.033029  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:28.033090  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:28.059854  530956 cri.go:89] found id: ""
	I1212 00:38:28.059869  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.059876  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:28.059881  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:28.059946  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:28.085318  530956 cri.go:89] found id: ""
	I1212 00:38:28.085332  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.085339  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:28.085349  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:28.085408  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:28.111377  530956 cri.go:89] found id: ""
	I1212 00:38:28.111390  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.111397  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:28.111403  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:28.111464  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:28.140880  530956 cri.go:89] found id: ""
	I1212 00:38:28.140894  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.140910  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:28.140915  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:28.140985  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:28.166928  530956 cri.go:89] found id: ""
	I1212 00:38:28.166943  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.166950  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:28.166955  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:28.167013  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:28.193116  530956 cri.go:89] found id: ""
	I1212 00:38:28.193129  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.193136  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:28.193144  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:28.193157  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:28.207536  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:28.207551  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:28.273869  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:28.265632   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.266161   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.267761   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.268328   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.269940   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:28.273878  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:28.273888  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:28.341616  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:28.341634  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:28.370270  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:28.370286  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
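The block above is one iteration of minikube's apiserver wait loop: it pgreps for a kube-apiserver process, asks crictl for containers of each control-plane component, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status evidence. A minimal, hypothetical Go sketch of the per-component crictl query (the real implementation lives in minikube's cri.go and runs over SSH; this standalone version assumes crictl is available on the local host):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the "sudo crictl ps -a --quiet --name=<component>"
// calls in the log above and returns the container IDs crictl reports.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %q: %w", component, err)
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		// An empty slice here corresponds to the `0 containers: []` lines in the log.
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}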
	I1212 00:38:30.938812  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:30.948944  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:30.949000  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:30.977305  530956 cri.go:89] found id: ""
	I1212 00:38:30.977320  530956 logs.go:282] 0 containers: []
	W1212 00:38:30.977327  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:30.977333  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:30.977393  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:31.004773  530956 cri.go:89] found id: ""
	I1212 00:38:31.004793  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.004802  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:31.004807  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:31.004878  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:31.034217  530956 cri.go:89] found id: ""
	I1212 00:38:31.034231  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.034238  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:31.034243  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:31.034299  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:31.059299  530956 cri.go:89] found id: ""
	I1212 00:38:31.059313  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.059320  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:31.059325  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:31.059389  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:31.085777  530956 cri.go:89] found id: ""
	I1212 00:38:31.085794  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.085801  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:31.085806  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:31.085870  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:31.113432  530956 cri.go:89] found id: ""
	I1212 00:38:31.113445  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.113453  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:31.113458  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:31.113517  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:31.140290  530956 cri.go:89] found id: ""
	I1212 00:38:31.140303  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.140310  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:31.140318  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:31.140329  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:31.170079  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:31.170095  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:31.237344  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:31.237366  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:31.252705  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:31.252722  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:31.314201  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:31.305835   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.306554   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.308300   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.308814   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.310412   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:31.314211  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:31.314222  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
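Every describe-nodes attempt fails the same way: kubectl's connection to https://localhost:8441 is refused before any API discovery can happen, which means nothing is listening on the apiserver port at all. A quick standalone probe that reproduces the symptom (port 8441 is the apiserver port this profile uses, per the errors above; the probe itself is illustrative, not part of the test):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused TCP dial here matches the kubectl errors repeated in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // expect "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}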
	I1212 00:38:33.887992  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:33.897911  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:33.897978  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:33.922473  530956 cri.go:89] found id: ""
	I1212 00:38:33.922487  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.922494  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:33.922499  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:33.922556  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:33.947695  530956 cri.go:89] found id: ""
	I1212 00:38:33.947709  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.947716  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:33.947720  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:33.947779  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:33.975167  530956 cri.go:89] found id: ""
	I1212 00:38:33.975181  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.975188  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:33.975194  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:33.975256  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:33.999707  530956 cri.go:89] found id: ""
	I1212 00:38:33.999722  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.999731  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:33.999736  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:33.999806  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:34.028202  530956 cri.go:89] found id: ""
	I1212 00:38:34.028216  530956 logs.go:282] 0 containers: []
	W1212 00:38:34.028224  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:34.028229  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:34.028289  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:34.053144  530956 cri.go:89] found id: ""
	I1212 00:38:34.053158  530956 logs.go:282] 0 containers: []
	W1212 00:38:34.053169  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:34.053175  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:34.053239  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:34.080035  530956 cri.go:89] found id: ""
	I1212 00:38:34.080050  530956 logs.go:282] 0 containers: []
	W1212 00:38:34.080058  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:34.080066  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:34.080076  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:34.146175  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:34.146192  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:34.160652  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:34.160668  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:34.223173  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:34.215210   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.216058   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.217521   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.217956   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.219415   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:34.223184  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:34.223194  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:34.292571  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:34.292590  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:36.820393  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:36.830345  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:36.830406  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:36.854187  530956 cri.go:89] found id: ""
	I1212 00:38:36.854201  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.854208  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:36.854213  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:36.854268  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:36.882747  530956 cri.go:89] found id: ""
	I1212 00:38:36.882767  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.882774  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:36.882779  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:36.882836  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:36.909295  530956 cri.go:89] found id: ""
	I1212 00:38:36.909310  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.909317  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:36.909321  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:36.909380  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:36.939718  530956 cri.go:89] found id: ""
	I1212 00:38:36.939732  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.939739  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:36.939745  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:36.939805  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:36.985049  530956 cri.go:89] found id: ""
	I1212 00:38:36.985063  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.985070  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:36.985075  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:36.985135  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:37.018069  530956 cri.go:89] found id: ""
	I1212 00:38:37.018092  530956 logs.go:282] 0 containers: []
	W1212 00:38:37.018101  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:37.018107  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:37.018197  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:37.045321  530956 cri.go:89] found id: ""
	I1212 00:38:37.045335  530956 logs.go:282] 0 containers: []
	W1212 00:38:37.045342  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:37.045349  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:37.045366  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:37.110695  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:37.110716  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:37.125484  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:37.125500  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:37.191768  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:37.183160   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.184307   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.184933   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.186530   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.186898   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:37.191778  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:37.191789  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:37.258979  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:37.258998  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
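The timestamps show the wait loop retrying on a roughly three-second cadence (00:38:28, :31, :34, :36, ...), each round opening with the same pgrep. An illustrative poll-until-deadline loop with that shape (the pgrep arguments are copied from the log; the two-minute timeout is an assumption for the sketch, not minikube's actual value):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout; minikube's real value may differ
	for time.Now().Before(deadline) {
		// Same check as the log: pgrep exits 0 when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}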
	I1212 00:38:39.789133  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:39.799919  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:39.799985  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:39.825459  530956 cri.go:89] found id: ""
	I1212 00:38:39.825473  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.825481  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:39.825487  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:39.825550  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:39.853725  530956 cri.go:89] found id: ""
	I1212 00:38:39.853741  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.853750  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:39.853757  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:39.853833  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:39.879329  530956 cri.go:89] found id: ""
	I1212 00:38:39.879343  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.879350  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:39.879355  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:39.879417  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:39.910098  530956 cri.go:89] found id: ""
	I1212 00:38:39.910111  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.910118  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:39.910124  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:39.910184  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:39.940693  530956 cri.go:89] found id: ""
	I1212 00:38:39.940707  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.940714  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:39.940719  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:39.940779  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:39.967072  530956 cri.go:89] found id: ""
	I1212 00:38:39.967085  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.967093  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:39.967099  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:39.967165  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:39.992659  530956 cri.go:89] found id: ""
	I1212 00:38:39.992672  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.992680  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:39.992687  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:39.992697  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:40.113165  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:40.113185  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:40.130134  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:40.130150  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:40.200442  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:40.191596   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.192392   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.194267   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.194628   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.196363   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:40.200453  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:40.200463  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:40.271707  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:40.271728  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:42.801953  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:42.811892  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:42.811958  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:42.841306  530956 cri.go:89] found id: ""
	I1212 00:38:42.841320  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.841328  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:42.841334  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:42.841395  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:42.869294  530956 cri.go:89] found id: ""
	I1212 00:38:42.869308  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.869314  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:42.869319  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:42.869381  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:42.898367  530956 cri.go:89] found id: ""
	I1212 00:38:42.898381  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.898388  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:42.898393  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:42.898454  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:42.925039  530956 cri.go:89] found id: ""
	I1212 00:38:42.925052  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.925059  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:42.925065  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:42.925125  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:42.955313  530956 cri.go:89] found id: ""
	I1212 00:38:42.955327  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.955334  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:42.955339  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:42.955404  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:42.979722  530956 cri.go:89] found id: ""
	I1212 00:38:42.979735  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.979742  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:42.979747  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:42.979808  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:43.027955  530956 cri.go:89] found id: ""
	I1212 00:38:43.027969  530956 logs.go:282] 0 containers: []
	W1212 00:38:43.027976  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:43.027983  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:43.027996  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:43.043222  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:43.043240  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:43.111269  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:43.102010   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.103597   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.104461   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.105967   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.106269   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:43.111321  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:43.111331  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:43.177977  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:43.177997  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:43.206880  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:43.206895  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:45.775312  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:45.785672  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:45.785736  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:45.811375  530956 cri.go:89] found id: ""
	I1212 00:38:45.811389  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.811396  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:45.811400  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:45.811459  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:45.836941  530956 cri.go:89] found id: ""
	I1212 00:38:45.836956  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.836963  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:45.836968  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:45.837031  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:45.863375  530956 cri.go:89] found id: ""
	I1212 00:38:45.863389  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.863396  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:45.863402  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:45.863461  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:45.888628  530956 cri.go:89] found id: ""
	I1212 00:38:45.888641  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.888648  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:45.888654  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:45.888712  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:45.917199  530956 cri.go:89] found id: ""
	I1212 00:38:45.917213  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.917221  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:45.917226  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:45.917289  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:45.944008  530956 cri.go:89] found id: ""
	I1212 00:38:45.944022  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.944029  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:45.944034  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:45.944093  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:45.968971  530956 cri.go:89] found id: ""
	I1212 00:38:45.968984  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.968992  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:45.969000  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:45.969010  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:46.034356  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:46.034375  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:46.048756  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:46.048771  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:46.115073  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:46.106286   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.107154   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.108746   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.109165   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.110638   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:46.115096  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:46.115107  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:46.182387  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:46.182407  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
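When no containers turn up, each round gathers the same evidence: the kubelet and CRI-O journals, kernel warnings from dmesg, and container status via crictl with a docker fallback. A small sketch that replays those exact commands on the node, assuming local shell access (the command strings are copied verbatim from the log above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Diagnostic commands as they appear in the log; run where journalctl/crictl live.
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo journalctl -u crio -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\n%s\n", c, out)
		if err != nil {
			fmt.Println("command failed:", err)
		}
	}
}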
	I1212 00:38:48.712482  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:48.722635  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:48.722715  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:48.752202  530956 cri.go:89] found id: ""
	I1212 00:38:48.752215  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.752222  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:48.752227  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:48.752287  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:48.779084  530956 cri.go:89] found id: ""
	I1212 00:38:48.779097  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.779105  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:48.779110  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:48.779165  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:48.803352  530956 cri.go:89] found id: ""
	I1212 00:38:48.803366  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.803375  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:48.803380  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:48.803441  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:48.829635  530956 cri.go:89] found id: ""
	I1212 00:38:48.829649  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.829656  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:48.829661  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:48.829720  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:48.854311  530956 cri.go:89] found id: ""
	I1212 00:38:48.854324  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.854332  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:48.854337  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:48.854394  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:48.879369  530956 cri.go:89] found id: ""
	I1212 00:38:48.879383  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.879390  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:48.879396  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:48.879456  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:48.908110  530956 cri.go:89] found id: ""
	I1212 00:38:48.908124  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.908131  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:48.908138  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:48.908151  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:48.972035  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:48.972053  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:48.986646  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:48.986668  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:49.053589  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:49.045696   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.046251   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.047754   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.048291   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.049804   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:49.053599  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:49.053608  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:49.123212  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:49.123236  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:51.651584  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:51.662032  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:51.662096  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:51.687559  530956 cri.go:89] found id: ""
	I1212 00:38:51.687573  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.687580  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:51.687586  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:51.687655  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:51.713801  530956 cri.go:89] found id: ""
	I1212 00:38:51.713828  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.713835  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:51.713840  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:51.713903  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:51.748993  530956 cri.go:89] found id: ""
	I1212 00:38:51.749006  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.749028  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:51.749034  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:51.749091  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:51.777108  530956 cri.go:89] found id: ""
	I1212 00:38:51.777122  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.777129  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:51.777135  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:51.777200  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:51.805174  530956 cri.go:89] found id: ""
	I1212 00:38:51.805188  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.805195  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:51.805201  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:51.805266  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:51.830660  530956 cri.go:89] found id: ""
	I1212 00:38:51.830674  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.830701  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:51.830706  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:51.830778  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:51.855989  530956 cri.go:89] found id: ""
	I1212 00:38:51.856003  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.856017  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:51.856024  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:51.856035  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:51.887241  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:51.887257  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:51.953055  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:51.953075  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:51.969638  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:51.969660  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:52.045683  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:52.037116   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.037541   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.039334   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.039776   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.041452   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:52.045694  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:52.045705  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
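
Each pass above is one iteration of the same control-plane probe: minikube lists CRI containers for every expected component and, finding none, falls back to gathering logs. Reconstructed from the commands in the log, the container sweep amounts to the shell loop below; this is a sketch, assuming crictl is installed on the node, not minikube's actual implementation.

    # Probe each expected control-plane component via CRI-O.
    # Component names are taken from the log above.
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids="$(sudo crictl ps -a --quiet --name="$name")"
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done

In this run the loop would print "No container was found matching ..." for all seven names, which is exactly what every cycle below reports.
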
	I1212 00:38:54.617323  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:54.627443  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:54.627502  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:54.651505  530956 cri.go:89] found id: ""
	I1212 00:38:54.651519  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.651526  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:54.651532  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:54.651589  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:54.675935  530956 cri.go:89] found id: ""
	I1212 00:38:54.675961  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.675968  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:54.675973  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:54.676042  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:54.701954  530956 cri.go:89] found id: ""
	I1212 00:38:54.701970  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.701979  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:54.701986  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:54.702056  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:54.733636  530956 cri.go:89] found id: ""
	I1212 00:38:54.733657  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.733666  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:54.733671  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:54.733742  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:54.761858  530956 cri.go:89] found id: ""
	I1212 00:38:54.761885  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.761892  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:54.761897  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:54.761965  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:54.798397  530956 cri.go:89] found id: ""
	I1212 00:38:54.798411  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.798431  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:54.798436  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:54.798502  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:54.823810  530956 cri.go:89] found id: ""
	I1212 00:38:54.823824  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.823831  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:54.823840  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:54.823850  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:54.891230  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:54.891249  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:54.907075  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:54.907092  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:54.979081  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:54.970178   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.971009   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.971760   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.973599   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.973900   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:54.979091  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:54.979103  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:55.048465  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:55.048486  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
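
The describe-nodes gather fails identically in every cycle: kubectl cannot reach https://localhost:8441 because no kube-apiserver container exists to listen there (the server URL comes from /var/lib/minikube/kubeconfig, passed with --kubeconfig above). Two quick ways to confirm the missing listener from the node itself; both are illustrations, assuming ss and curl are available, and are not part of the test.

    sudo ss -ltn "sport = :8441"       # expect no LISTEN row for the apiserver port
    curl -ksf https://localhost:8441/healthz \
      || echo "apiserver not reachable on :8441"   # connection refused, as in the log
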
	I1212 00:38:57.579400  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:57.590372  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:57.590435  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:57.618082  530956 cri.go:89] found id: ""
	I1212 00:38:57.618096  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.618103  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:57.618108  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:57.618169  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:57.644801  530956 cri.go:89] found id: ""
	I1212 00:38:57.644815  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.644822  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:57.644827  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:57.644886  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:57.670018  530956 cri.go:89] found id: ""
	I1212 00:38:57.670032  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.670045  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:57.670050  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:57.670111  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:57.695026  530956 cri.go:89] found id: ""
	I1212 00:38:57.695040  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.695047  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:57.695052  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:57.695116  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:57.726077  530956 cri.go:89] found id: ""
	I1212 00:38:57.726091  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.726098  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:57.726103  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:57.726182  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:57.766280  530956 cri.go:89] found id: ""
	I1212 00:38:57.766295  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.766302  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:57.766308  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:57.766366  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:57.794888  530956 cri.go:89] found id: ""
	I1212 00:38:57.794902  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.794909  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:57.794917  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:57.794931  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:57.861092  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:57.861111  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:57.876214  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:57.876230  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:57.943746  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:57.933552   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.934412   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.936560   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.937552   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.938297   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:57.943757  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:57.943767  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:58.013702  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:58.013722  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
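
The gather phase of each cycle is four host commands, all visible in the lines above. Collected in one place for reference (taken from the log; the last line's backticks are rewritten as $( ) but behave the same):

    sudo journalctl -u kubelet -n 400    # kubelet unit log, last 400 lines
    sudo journalctl -u crio -n 400       # CRI-O unit log, last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
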
	I1212 00:39:00.543612  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:00.553735  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:00.553795  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:00.580386  530956 cri.go:89] found id: ""
	I1212 00:39:00.580400  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.580407  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:00.580412  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:00.580471  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:00.608511  530956 cri.go:89] found id: ""
	I1212 00:39:00.608525  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.608532  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:00.608537  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:00.608594  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:00.633613  530956 cri.go:89] found id: ""
	I1212 00:39:00.633627  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.633634  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:00.633639  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:00.633696  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:00.658755  530956 cri.go:89] found id: ""
	I1212 00:39:00.658769  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.658776  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:00.658782  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:00.658845  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:00.688160  530956 cri.go:89] found id: ""
	I1212 00:39:00.688174  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.688181  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:00.688187  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:00.688246  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:00.714115  530956 cri.go:89] found id: ""
	I1212 00:39:00.714129  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.714136  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:00.714142  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:00.714203  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:00.743594  530956 cri.go:89] found id: ""
	I1212 00:39:00.743607  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.743614  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:00.743622  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:00.743632  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:00.825728  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:00.825750  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:00.840575  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:00.840590  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:00.904328  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:00.896372   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.896852   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.898503   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.898943   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.900532   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:00.904339  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:00.904350  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:00.971157  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:00.971177  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:03.500568  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:03.510753  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:03.510824  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:03.535333  530956 cri.go:89] found id: ""
	I1212 00:39:03.535347  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.535354  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:03.535359  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:03.535422  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:03.560575  530956 cri.go:89] found id: ""
	I1212 00:39:03.560589  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.560597  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:03.560602  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:03.560659  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:03.589048  530956 cri.go:89] found id: ""
	I1212 00:39:03.589062  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.589069  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:03.589075  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:03.589131  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:03.614812  530956 cri.go:89] found id: ""
	I1212 00:39:03.614826  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.614834  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:03.614839  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:03.614908  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:03.641138  530956 cri.go:89] found id: ""
	I1212 00:39:03.641152  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.641158  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:03.641164  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:03.641221  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:03.669855  530956 cri.go:89] found id: ""
	I1212 00:39:03.669869  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.669876  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:03.669884  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:03.669943  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:03.694625  530956 cri.go:89] found id: ""
	I1212 00:39:03.694650  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.694657  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:03.694665  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:03.694676  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:03.761872  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:03.761891  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:03.777581  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:03.777598  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:03.843774  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:03.835704   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.836242   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.838010   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.838382   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.839850   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:03.843783  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:03.843793  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:03.914951  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:03.914977  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
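
Before each crictl sweep, the probe first looks for an apiserver process directly. The pgrep flags mean: match against the full command line (-f), require an exact match (-x), and report only the newest matching PID (-n); exit status 1, as in every cycle here, simply means no such process exists yet.

    # Prints the newest PID whose full command line matches, or exits 1.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
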
	I1212 00:39:06.443917  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:06.454370  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:06.454434  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:06.482109  530956 cri.go:89] found id: ""
	I1212 00:39:06.482123  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.482131  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:06.482136  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:06.482199  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:06.509716  530956 cri.go:89] found id: ""
	I1212 00:39:06.509730  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.509737  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:06.509742  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:06.509800  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:06.537521  530956 cri.go:89] found id: ""
	I1212 00:39:06.537535  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.537542  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:06.537548  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:06.537606  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:06.562757  530956 cri.go:89] found id: ""
	I1212 00:39:06.562770  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.562778  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:06.562783  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:06.562842  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:06.587417  530956 cri.go:89] found id: ""
	I1212 00:39:06.587431  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.587439  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:06.587443  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:06.587507  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:06.612775  530956 cri.go:89] found id: ""
	I1212 00:39:06.612789  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.612797  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:06.612804  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:06.612864  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:06.637360  530956 cri.go:89] found id: ""
	I1212 00:39:06.637374  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.637382  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:06.637389  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:06.637400  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:06.651687  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:06.651703  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:06.714510  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:06.706351   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.706972   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.708728   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.709213   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.710650   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:06.714521  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:06.714531  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:06.793242  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:06.793263  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:06.825153  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:06.825170  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:09.391589  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:09.401762  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:09.401823  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:09.426113  530956 cri.go:89] found id: ""
	I1212 00:39:09.426127  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.426135  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:09.426139  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:09.426197  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:09.455495  530956 cri.go:89] found id: ""
	I1212 00:39:09.455509  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.455522  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:09.455527  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:09.455586  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:09.484947  530956 cri.go:89] found id: ""
	I1212 00:39:09.484961  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.484969  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:09.484975  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:09.485038  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:09.510850  530956 cri.go:89] found id: ""
	I1212 00:39:09.510865  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.510873  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:09.510878  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:09.510936  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:09.536933  530956 cri.go:89] found id: ""
	I1212 00:39:09.536955  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.536963  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:09.536968  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:09.537038  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:09.565308  530956 cri.go:89] found id: ""
	I1212 00:39:09.565321  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.565328  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:09.565333  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:09.565391  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:09.596694  530956 cri.go:89] found id: ""
	I1212 00:39:09.596708  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.596716  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:09.596724  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:09.596734  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:09.661768  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:09.661787  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:09.676496  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:09.676512  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:09.751036  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:09.740810   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.741527   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.743548   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.744409   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.746279   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:09.751057  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:09.751069  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:09.831885  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:09.831905  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
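
The timestamps show the whole probe repeating on a roughly three-second cadence (00:38:51, :54, :57, 00:39:00, ...), i.e. a plain poll-until-deadline loop. A minimal sketch of that loop with an assumed five-minute budget; the real timeout is not visible in this excerpt.

    deadline=$(( $(date +%s) + 300 ))   # 5-minute budget; assumed, not from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3                           # matches the observed cadence
    done
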
	I1212 00:39:12.361885  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:12.371912  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:12.371972  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:12.400852  530956 cri.go:89] found id: ""
	I1212 00:39:12.400867  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.400874  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:12.400879  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:12.400939  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:12.426229  530956 cri.go:89] found id: ""
	I1212 00:39:12.426244  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.426251  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:12.426256  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:12.426313  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:12.455450  530956 cri.go:89] found id: ""
	I1212 00:39:12.455465  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.455472  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:12.455477  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:12.455542  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:12.480339  530956 cri.go:89] found id: ""
	I1212 00:39:12.480353  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.480360  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:12.480365  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:12.480425  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:12.508098  530956 cri.go:89] found id: ""
	I1212 00:39:12.508112  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.508119  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:12.508124  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:12.508185  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:12.534232  530956 cri.go:89] found id: ""
	I1212 00:39:12.534246  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.534253  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:12.534259  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:12.534318  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:12.564030  530956 cri.go:89] found id: ""
	I1212 00:39:12.564045  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.564053  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:12.564061  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:12.564072  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:12.578300  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:12.578315  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:12.645692  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:12.637241   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.637958   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.639484   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.639902   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.641511   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:12.645702  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:12.645714  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:12.716817  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:12.716835  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:12.755607  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:12.755622  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:15.328461  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:15.338656  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:15.338747  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:15.368754  530956 cri.go:89] found id: ""
	I1212 00:39:15.368768  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.368775  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:15.368780  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:15.368839  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:15.395430  530956 cri.go:89] found id: ""
	I1212 00:39:15.395444  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.395451  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:15.395456  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:15.395522  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:15.420901  530956 cri.go:89] found id: ""
	I1212 00:39:15.420922  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.420930  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:15.420935  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:15.420996  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:15.446341  530956 cri.go:89] found id: ""
	I1212 00:39:15.446355  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.446362  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:15.446367  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:15.446425  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:15.472134  530956 cri.go:89] found id: ""
	I1212 00:39:15.472148  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.472155  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:15.472160  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:15.472224  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:15.499707  530956 cri.go:89] found id: ""
	I1212 00:39:15.499721  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.499729  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:15.499734  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:15.499803  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:15.525097  530956 cri.go:89] found id: ""
	I1212 00:39:15.525111  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.525119  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:15.525126  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:15.525141  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:15.591570  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:15.591589  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:15.606307  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:15.606323  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:15.671615  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:15.663912   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.664737   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.666223   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.666722   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.668013   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:15.671625  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:15.671640  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:15.740633  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:15.740680  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:18.284352  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:18.294497  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:18.294570  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:18.320150  530956 cri.go:89] found id: ""
	I1212 00:39:18.320164  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.320173  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:18.320178  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:18.320236  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:18.346472  530956 cri.go:89] found id: ""
	I1212 00:39:18.346486  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.346493  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:18.346498  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:18.346556  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:18.377328  530956 cri.go:89] found id: ""
	I1212 00:39:18.377342  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.377349  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:18.377354  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:18.377411  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:18.402792  530956 cri.go:89] found id: ""
	I1212 00:39:18.402813  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.402820  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:18.402826  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:18.402889  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:18.433183  530956 cri.go:89] found id: ""
	I1212 00:39:18.433198  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.433205  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:18.433210  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:18.433272  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:18.458993  530956 cri.go:89] found id: ""
	I1212 00:39:18.459007  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.459015  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:18.459020  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:18.459082  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:18.483237  530956 cri.go:89] found id: ""
	I1212 00:39:18.483251  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.483258  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:18.483267  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:18.483276  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:18.549785  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:18.549803  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:18.564675  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:18.564692  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:18.635252  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:18.622293   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.627996   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.628725   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.629829   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.630310   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:18.622293   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.627996   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.628725   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.629829   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.630310   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:18.635261  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:18.635271  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:18.704032  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:18.704054  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
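
Annotation: each cri.go:54/cri.go:89 pair in the cycle above is one `sudo crictl ps -a --quiet --name=<component>` call; `--quiet` prints container IDs only, so an empty stdout is exactly what `found id: ""` and "0 containers" record. A sketch of that bookkeeping, assuming nothing beyond the crictl flags shown in the log:

```go
// cri_list.go: illustrative wrapper matching the found-id bookkeeping in
// the log (cri.go:89). Not minikube's real code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers returns the IDs of all containers (running or not)
// whose name matches the filter; --quiet restricts output to IDs.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // empty stdout -> no IDs
}

func main() {
	ids, err := listCRIContainers("kube-apiserver")
	fmt.Printf("%d containers: %v (err=%v)\n", len(ids), ids, err)
}
```
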
	I1212 00:39:21.245504  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:21.256336  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:21.256398  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:21.282849  530956 cri.go:89] found id: ""
	I1212 00:39:21.282863  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.282871  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:21.282878  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:21.282936  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:21.309330  530956 cri.go:89] found id: ""
	I1212 00:39:21.309344  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.309351  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:21.309359  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:21.309419  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:21.338973  530956 cri.go:89] found id: ""
	I1212 00:39:21.338986  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.338994  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:21.338999  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:21.339064  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:21.366261  530956 cri.go:89] found id: ""
	I1212 00:39:21.366275  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.366282  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:21.366287  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:21.366346  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:21.393801  530956 cri.go:89] found id: ""
	I1212 00:39:21.393815  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.393822  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:21.393827  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:21.393888  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:21.418339  530956 cri.go:89] found id: ""
	I1212 00:39:21.418353  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.418360  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:21.418365  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:21.418425  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:21.443336  530956 cri.go:89] found id: ""
	I1212 00:39:21.443350  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.443356  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:21.443364  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:21.443375  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:21.470973  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:21.470988  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:21.540182  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:21.540203  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:21.554835  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:21.554851  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:21.618440  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:21.609987   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.610798   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.612355   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.612867   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.614445   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:21.609987   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.610798   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.612355   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.612867   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.614445   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:21.618450  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:21.618460  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
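
Annotation: the timestamps step 00:39:15 → 18 → 21 → 24 → ..., so the whole gather-and-check cycle repeats on roughly a three-second cadence until an overall deadline expires. A stdlib sketch of such a poll loop; the interval is inferred from the log spacing and healthy() is a stand-in for the pgrep/crictl checks actually performed:

```go
// poll.go: sketch of a fixed-interval retry loop like the one implied by
// the ~3s spacing of the timestamps above.
package main

import (
	"context"
	"fmt"
	"time"
)

func waitForAPIServer(ctx context.Context, healthy func() bool) error {
	ticker := time.NewTicker(3 * time.Second)
	defer ticker.Stop()
	for {
		if healthy() {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // overall timeout: give up, surface logs
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	err := waitForAPIServer(ctx, func() bool { return false })
	fmt.Println("wait ended:", err)
}
```
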
	I1212 00:39:24.186363  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:24.196446  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:24.196514  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:24.228176  530956 cri.go:89] found id: ""
	I1212 00:39:24.228189  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.228196  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:24.228201  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:24.228263  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:24.252432  530956 cri.go:89] found id: ""
	I1212 00:39:24.252446  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.252453  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:24.252458  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:24.252517  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:24.277088  530956 cri.go:89] found id: ""
	I1212 00:39:24.277102  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.277109  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:24.277113  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:24.277172  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:24.301976  530956 cri.go:89] found id: ""
	I1212 00:39:24.301989  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.301996  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:24.302001  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:24.302058  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:24.326771  530956 cri.go:89] found id: ""
	I1212 00:39:24.326785  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.326792  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:24.326797  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:24.326858  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:24.352740  530956 cri.go:89] found id: ""
	I1212 00:39:24.352754  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.352761  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:24.352766  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:24.352825  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:24.379469  530956 cri.go:89] found id: ""
	I1212 00:39:24.379483  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.379490  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:24.379498  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:24.379508  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:24.407400  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:24.407417  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:24.473931  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:24.473951  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:24.488478  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:24.488494  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:24.552073  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:24.544053   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.544795   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.546281   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.546877   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.548351   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:24.544053   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.544795   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.546281   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.546877   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.548351   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:24.552083  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:24.552093  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
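
Annotation: each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*` — `-f` matches against the full command line, `-x` requires the whole line to match the pattern, `-n` reports only the newest match. pgrep exits with status 1 when no process matches, which is why the loop falls straight through to the crictl queries. A sketch of distinguishing that benign exit code from a real failure:

```go
// pgrep.go: sketch showing how exit status 1 from pgrep ("no processes
// matched") can be told apart from an actual error. The pattern is the
// one from the log; running it locally is illustrative only.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func apiServerRunning() (bool, error) {
	err := exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return false, nil // pgrep ran fine; it just found nothing
	}
	return err == nil, err
}

func main() {
	up, err := apiServerRunning()
	fmt.Println("apiserver process present:", up, "err:", err)
}
```
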
	I1212 00:39:27.124323  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:27.134160  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:27.134218  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:27.161224  530956 cri.go:89] found id: ""
	I1212 00:39:27.161239  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.161247  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:27.161253  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:27.161317  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:27.185561  530956 cri.go:89] found id: ""
	I1212 00:39:27.185575  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.185582  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:27.185587  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:27.185647  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:27.212949  530956 cri.go:89] found id: ""
	I1212 00:39:27.212962  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.212969  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:27.212974  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:27.213035  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:27.237907  530956 cri.go:89] found id: ""
	I1212 00:39:27.237921  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.237928  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:27.237933  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:27.237991  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:27.264773  530956 cri.go:89] found id: ""
	I1212 00:39:27.264787  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.264794  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:27.264799  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:27.264858  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:27.290448  530956 cri.go:89] found id: ""
	I1212 00:39:27.290462  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.290469  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:27.290474  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:27.290531  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:27.315823  530956 cri.go:89] found id: ""
	I1212 00:39:27.315837  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.315844  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:27.315852  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:27.315863  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:27.389757  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:27.389777  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:27.422043  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:27.422059  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:27.492490  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:27.492509  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:27.507777  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:27.507793  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:27.571981  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:27.563800   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.564533   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.566031   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.566607   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.568082   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:27.563800   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.564533   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.566031   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.566607   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.568082   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
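
Annotation: the "describe nodes" gatherer deliberately runs the guest's version-matched kubectl (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) against the guest's kubeconfig rather than any host kubectl. With the apiserver down it exits 1; logs.go:130 records the failure (stdout empty, stderr as shown) and the loop carries on with the remaining gatherers. A sketch capturing stdout and stderr separately, the way the log presents them — paths copied from the log, local exec as a stand-in for ssh_runner:

```go
// describe_nodes.go: illustrative runner for the version-matched kubectl
// invocation seen above, with stdout/stderr split as in the log output.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func describeNodes() (stdout, stderr string, err error) {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var out, errBuf bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &errBuf
	err = cmd.Run() // exits 1 while the apiserver is unreachable
	return out.String(), errBuf.String(), err
}

func main() {
	o, e, err := describeNodes()
	fmt.Printf("stdout:\n%s\nstderr:\n%s\nerr: %v\n", o, e, err)
}
```
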
	I1212 00:39:30.074632  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:30.089373  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:30.089465  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:30.125906  530956 cri.go:89] found id: ""
	I1212 00:39:30.125923  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.125931  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:30.125939  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:30.126019  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:30.159780  530956 cri.go:89] found id: ""
	I1212 00:39:30.159796  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.159804  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:30.159810  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:30.159878  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:30.186451  530956 cri.go:89] found id: ""
	I1212 00:39:30.186466  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.186473  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:30.186478  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:30.186541  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:30.212831  530956 cri.go:89] found id: ""
	I1212 00:39:30.212846  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.212859  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:30.212864  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:30.212926  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:30.239897  530956 cri.go:89] found id: ""
	I1212 00:39:30.239912  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.239919  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:30.239924  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:30.239987  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:30.265595  530956 cri.go:89] found id: ""
	I1212 00:39:30.265610  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.265618  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:30.265623  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:30.265684  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:30.293057  530956 cri.go:89] found id: ""
	I1212 00:39:30.293072  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.293079  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:30.293087  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:30.293098  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:30.360384  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:30.360403  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:30.375514  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:30.375533  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:30.445622  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:30.436678   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.437400   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.439405   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.440010   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.441699   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:30.436678   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.437400   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.439405   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.440010   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.441699   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:30.445632  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:30.445642  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:30.514984  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:30.515002  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
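
Annotation: for reference, every "Gathering logs for ..." line in these cycles maps to one of five fixed shell commands — the journal units are tailed to their last 400 lines, and dmesg is asked for human-readable (-H), unpaged (-P), uncolored (-L=never) output restricted to warn level and worse. The commands below are copied verbatim from the log; collecting them in a map is purely for reference:

```go
// gatherers.go: the shell commands behind each "Gathering logs for ..."
// line above, gathered in one place. Illustrative only.
package main

import "fmt"

var gatherers = map[string]string{
	"kubelet":          `sudo journalctl -u kubelet -n 400`,
	"CRI-O":            `sudo journalctl -u crio -n 400`,
	"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	"describe nodes":   `sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range gatherers {
		fmt.Printf("%-16s %s\n", name, cmd)
	}
}
```
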
	I1212 00:39:33.046486  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:33.057328  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:33.057389  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:33.082571  530956 cri.go:89] found id: ""
	I1212 00:39:33.082584  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.082592  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:33.082597  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:33.082656  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:33.107156  530956 cri.go:89] found id: ""
	I1212 00:39:33.107169  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.107176  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:33.107181  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:33.107242  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:33.132433  530956 cri.go:89] found id: ""
	I1212 00:39:33.132448  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.132456  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:33.132460  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:33.132524  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:33.158141  530956 cri.go:89] found id: ""
	I1212 00:39:33.158155  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.158162  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:33.158167  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:33.158229  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:33.185335  530956 cri.go:89] found id: ""
	I1212 00:39:33.185350  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.185357  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:33.185362  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:33.185423  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:33.214702  530956 cri.go:89] found id: ""
	I1212 00:39:33.214716  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.214731  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:33.214738  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:33.214798  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:33.239415  530956 cri.go:89] found id: ""
	I1212 00:39:33.239429  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.239436  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:33.239444  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:33.239462  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:33.303881  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:33.303900  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:33.318306  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:33.318324  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:33.385940  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:33.376699   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.377337   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379127   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379736   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.381336   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:33.376699   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.377337   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379127   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379736   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.381336   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:33.385950  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:33.385961  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:33.453867  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:33.453884  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
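
Annotation: notice that the gather order shuffles between cycles — kubelet-first at 00:39:18, container-status-first at 00:39:21 and 00:39:24, CRI-O-first at 00:39:27, dmesg-first at 00:39:39. That shuffling is consistent with the gatherers being stored in a Go map (an assumption about minikube's internals, not confirmed by this log), since Go deliberately randomizes map iteration order on every traversal:

```go
// map_order.go: Go randomizes map iteration order per traversal, which
// would explain the varying "Gathering logs for ..." sequence above.
// Run this twice and the printed order will likely differ.
package main

import "fmt"

func main() {
	sources := map[string]bool{
		"kubelet": true, "dmesg": true, "describe nodes": true,
		"CRI-O": true, "container status": true,
	}
	for name := range sources {
		fmt.Println("gathering", name)
	}
}
```
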
	I1212 00:39:35.983022  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:35.993721  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:35.993785  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:36.032639  530956 cri.go:89] found id: ""
	I1212 00:39:36.032654  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.032662  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:36.032667  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:36.032737  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:36.069795  530956 cri.go:89] found id: ""
	I1212 00:39:36.069810  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.069817  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:36.069822  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:36.069882  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:36.099096  530956 cri.go:89] found id: ""
	I1212 00:39:36.099111  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.099118  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:36.099124  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:36.099184  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:36.128685  530956 cri.go:89] found id: ""
	I1212 00:39:36.128699  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.128706  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:36.128711  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:36.128772  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:36.154641  530956 cri.go:89] found id: ""
	I1212 00:39:36.154654  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.154662  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:36.154666  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:36.154762  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:36.179316  530956 cri.go:89] found id: ""
	I1212 00:39:36.179330  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.179338  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:36.179343  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:36.179402  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:36.205036  530956 cri.go:89] found id: ""
	I1212 00:39:36.205050  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.205057  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:36.205066  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:36.205079  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:36.271067  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:36.271086  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:36.285990  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:36.286006  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:36.350986  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:36.343284   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.343819   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345314   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345743   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.347205   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:36.343284   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.343819   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345314   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345743   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.347205   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:36.350996  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:36.351005  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:36.418783  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:36.418803  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
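
Annotation: the recurring "dial tcp [::1]:8441: connect: connection refused" is, in Go terms, a *net.OpError wrapping syscall.ECONNREFUSED. Code that wants to retry only on "nothing listening yet" can match it with errors.Is instead of string comparison — a sketch, not minikube's actual error handling:

```go
// refused.go: classifying the dial error seen throughout the log without
// string-matching; errors.Is unwraps net.OpError down to the errno.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "[::1]:8441", time.Second)
	switch {
	case errors.Is(err, syscall.ECONNREFUSED):
		fmt.Println("port closed -- apiserver not up yet, keep polling")
	case err != nil:
		fmt.Println("different failure:", err)
	default:
		conn.Close()
		fmt.Println("connected")
	}
}
```
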
	I1212 00:39:38.948706  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:38.958630  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:38.958705  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:38.988268  530956 cri.go:89] found id: ""
	I1212 00:39:38.988282  530956 logs.go:282] 0 containers: []
	W1212 00:39:38.988289  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:38.988294  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:38.988372  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:39.017066  530956 cri.go:89] found id: ""
	I1212 00:39:39.017088  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.017095  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:39.017100  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:39.017158  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:39.044203  530956 cri.go:89] found id: ""
	I1212 00:39:39.044217  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.044223  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:39.044232  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:39.044293  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:39.073574  530956 cri.go:89] found id: ""
	I1212 00:39:39.073588  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.073595  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:39.073600  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:39.073658  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:39.098254  530956 cri.go:89] found id: ""
	I1212 00:39:39.098267  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.098274  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:39.098279  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:39.098338  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:39.122552  530956 cri.go:89] found id: ""
	I1212 00:39:39.122566  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.122573  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:39.122578  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:39.122641  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:39.149933  530956 cri.go:89] found id: ""
	I1212 00:39:39.149947  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.149954  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:39.149961  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:39.149972  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:39.164970  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:39.164986  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:39.228249  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:39.219299   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.219833   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.221740   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.222278   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.223944   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:39.219299   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.219833   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.221740   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.222278   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.223944   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:39.228259  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:39.228272  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:39.295712  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:39.295731  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:39.326861  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:39.326879  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:41.894749  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:41.904730  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:41.904791  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:41.929481  530956 cri.go:89] found id: ""
	I1212 00:39:41.929494  530956 logs.go:282] 0 containers: []
	W1212 00:39:41.929501  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:41.929506  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:41.929564  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:41.956371  530956 cri.go:89] found id: ""
	I1212 00:39:41.956385  530956 logs.go:282] 0 containers: []
	W1212 00:39:41.956392  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:41.956397  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:41.956453  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:41.998298  530956 cri.go:89] found id: ""
	I1212 00:39:41.998313  530956 logs.go:282] 0 containers: []
	W1212 00:39:41.998327  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:41.998332  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:41.998394  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:42.039528  530956 cri.go:89] found id: ""
	I1212 00:39:42.039542  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.039549  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:42.039554  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:42.039617  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:42.071895  530956 cri.go:89] found id: ""
	I1212 00:39:42.071909  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.071918  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:42.071923  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:42.071999  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:42.104807  530956 cri.go:89] found id: ""
	I1212 00:39:42.104823  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.104831  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:42.104837  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:42.104914  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:42.139871  530956 cri.go:89] found id: ""
	I1212 00:39:42.139886  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.139894  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:42.139903  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:42.139917  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:42.221872  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:42.211579   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.212551   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.214428   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.215573   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.216630   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:42.211579   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.212551   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.214428   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.215573   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.216630   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:42.221883  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:42.221894  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:42.294247  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:42.294267  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:42.327229  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:42.327245  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:42.396289  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:42.396308  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
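The cycle above repeats throughout this restart window: minikube probes for each control-plane container by name and, finding none, re-gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal sketch of the same probe, assuming crictl is already pointed at the CRI-O socket (as it is inside the minikube node):

    # Probe for control-plane containers the way cri.go does above; an empty
    # result is what produces the 'No container was found matching' warnings.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "no container matching \"$name\""
    done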
	I1212 00:39:44.911333  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:44.921559  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:44.921618  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:44.947811  530956 cri.go:89] found id: ""
	I1212 00:39:44.947825  530956 logs.go:282] 0 containers: []
	W1212 00:39:44.947832  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:44.947837  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:44.947898  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:44.974488  530956 cri.go:89] found id: ""
	I1212 00:39:44.974502  530956 logs.go:282] 0 containers: []
	W1212 00:39:44.974509  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:44.974514  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:44.974578  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:45.062335  530956 cri.go:89] found id: ""
	I1212 00:39:45.062350  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.062358  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:45.062363  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:45.062431  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:45.115594  530956 cri.go:89] found id: ""
	I1212 00:39:45.115611  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.115621  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:45.115627  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:45.115695  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:45.157432  530956 cri.go:89] found id: ""
	I1212 00:39:45.157449  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.157457  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:45.157463  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:45.157542  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:45.199222  530956 cri.go:89] found id: ""
	I1212 00:39:45.199237  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.199247  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:45.199252  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:45.199327  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:45.277211  530956 cri.go:89] found id: ""
	I1212 00:39:45.277239  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.277248  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:45.277256  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:45.277272  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:45.354665  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:45.354742  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:45.370015  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:45.370032  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:45.437294  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:45.428349   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.429025   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.430763   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.431362   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.433211   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:45.428349   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.429025   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.430763   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.431362   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.433211   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:45.437306  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:45.437317  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:45.506731  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:45.506752  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
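Every describe-nodes attempt fails the same way: kubectl dials the apiserver at localhost:8441, the endpoint in /var/lib/minikube/kubeconfig, and nothing is listening because no kube-apiserver container exists yet. Two quick manual checks one could run on the node (not part of the test itself):

    # Is anything bound to the apiserver port, and does /healthz answer?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    curl -ksf https://localhost:8441/healthz || echo "apiserver not answering"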
	I1212 00:39:48.035477  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:48.045681  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:48.045741  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:48.076045  530956 cri.go:89] found id: ""
	I1212 00:39:48.076059  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.076066  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:48.076072  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:48.076135  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:48.110061  530956 cri.go:89] found id: ""
	I1212 00:39:48.110074  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.110082  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:48.110087  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:48.110146  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:48.134924  530956 cri.go:89] found id: ""
	I1212 00:39:48.134939  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.134946  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:48.134951  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:48.135014  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:48.160105  530956 cri.go:89] found id: ""
	I1212 00:39:48.160119  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.160126  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:48.160131  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:48.160199  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:48.185148  530956 cri.go:89] found id: ""
	I1212 00:39:48.185162  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.185169  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:48.185174  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:48.185236  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:48.210105  530956 cri.go:89] found id: ""
	I1212 00:39:48.210119  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.210127  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:48.210132  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:48.210198  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:48.234723  530956 cri.go:89] found id: ""
	I1212 00:39:48.234736  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.234743  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:48.234752  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:48.234762  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:48.264606  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:48.264624  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:48.333093  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:48.333111  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:48.348065  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:48.348080  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:48.410868  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:48.402563   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.403327   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405002   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405468   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.407068   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:48.402563   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.403327   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405002   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405468   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.407068   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:48.410879  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:48.410891  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
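Each cycle opens with the same process probe. The pgrep flags matter here: -f matches against the full command line, -x requires the pattern to match that whole line, and -n keeps only the newest match, so a non-zero exit simply means no kube-apiserver process is up yet:

    # Returns the newest PID whose full command line matches the pattern,
    # or exits 1 when no such process exists (the case throughout this log).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'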
	I1212 00:39:50.982598  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:50.995299  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:50.995361  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:51.032326  530956 cri.go:89] found id: ""
	I1212 00:39:51.032340  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.032348  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:51.032353  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:51.032412  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:51.060416  530956 cri.go:89] found id: ""
	I1212 00:39:51.060435  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.060444  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:51.060448  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:51.060525  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:51.087755  530956 cri.go:89] found id: ""
	I1212 00:39:51.087769  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.087777  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:51.087783  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:51.087844  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:51.113932  530956 cri.go:89] found id: ""
	I1212 00:39:51.113946  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.113954  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:51.113959  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:51.114017  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:51.141585  530956 cri.go:89] found id: ""
	I1212 00:39:51.141599  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.141607  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:51.141612  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:51.141678  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:51.169491  530956 cri.go:89] found id: ""
	I1212 00:39:51.169506  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.169513  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:51.169518  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:51.169577  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:51.195655  530956 cri.go:89] found id: ""
	I1212 00:39:51.195668  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.195676  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:51.195684  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:51.195694  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:51.264764  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:51.264785  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:51.291612  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:51.291628  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:51.359746  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:51.359764  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:51.374319  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:51.374340  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:51.437078  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:51.428471   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.429034   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.430488   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.431070   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.432638   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:51.428471   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.429034   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.430488   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.431070   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.432638   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:53.938110  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:53.948663  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:53.948763  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:53.987477  530956 cri.go:89] found id: ""
	I1212 00:39:53.987490  530956 logs.go:282] 0 containers: []
	W1212 00:39:53.987497  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:53.987502  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:53.987565  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:54.026859  530956 cri.go:89] found id: ""
	I1212 00:39:54.026873  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.026881  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:54.026897  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:54.026958  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:54.054638  530956 cri.go:89] found id: ""
	I1212 00:39:54.054652  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.054659  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:54.054664  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:54.054820  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:54.080864  530956 cri.go:89] found id: ""
	I1212 00:39:54.080879  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.080886  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:54.080891  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:54.080958  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:54.106972  530956 cri.go:89] found id: ""
	I1212 00:39:54.106986  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.106993  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:54.106998  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:54.107056  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:54.131665  530956 cri.go:89] found id: ""
	I1212 00:39:54.131678  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.131686  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:54.131692  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:54.131749  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:54.155857  530956 cri.go:89] found id: ""
	I1212 00:39:54.155870  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.155877  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:54.155885  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:54.155895  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:54.225662  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:54.216735   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.217433   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219268   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219827   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.221703   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:54.216735   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.217433   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219268   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219827   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.221703   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:54.225675  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:54.225686  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:54.297964  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:54.297992  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:54.330016  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:54.330041  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:54.401820  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:54.401842  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
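The dmesg invocation is worth decoding once, since it recurs in every gathering pass (flags as in util-linux dmesg):

    #   -H  human-readable output      -P  disable the pager
    #   -L=never  no color codes       --level  keep only these priorities
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400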
	I1212 00:39:56.918391  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:56.929720  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:56.929780  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:56.955459  530956 cri.go:89] found id: ""
	I1212 00:39:56.955473  530956 logs.go:282] 0 containers: []
	W1212 00:39:56.955480  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:56.955485  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:56.955543  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:56.987918  530956 cri.go:89] found id: ""
	I1212 00:39:56.987932  530956 logs.go:282] 0 containers: []
	W1212 00:39:56.987939  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:56.987944  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:56.988002  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:57.020006  530956 cri.go:89] found id: ""
	I1212 00:39:57.020020  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.020033  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:57.020038  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:57.020115  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:57.048442  530956 cri.go:89] found id: ""
	I1212 00:39:57.048467  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.048475  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:57.048483  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:57.048552  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:57.074435  530956 cri.go:89] found id: ""
	I1212 00:39:57.074449  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.074456  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:57.074461  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:57.074521  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:57.099293  530956 cri.go:89] found id: ""
	I1212 00:39:57.099307  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.099315  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:57.099320  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:57.099379  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:57.125629  530956 cri.go:89] found id: ""
	I1212 00:39:57.125651  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.125659  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:57.125666  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:57.125676  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:57.155351  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:57.155367  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:57.220025  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:57.220044  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:57.234981  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:57.235003  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:57.300835  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:57.292962   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.293535   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295085   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295551   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.297012   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:57.292962   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.293535   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295085   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295551   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.297012   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:57.300845  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:57.300856  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:59.869530  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:59.882048  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:59.882110  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:59.907681  530956 cri.go:89] found id: ""
	I1212 00:39:59.907696  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.907703  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:59.907708  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:59.907775  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:59.932480  530956 cri.go:89] found id: ""
	I1212 00:39:59.932494  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.932509  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:59.932515  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:59.932583  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:59.958173  530956 cri.go:89] found id: ""
	I1212 00:39:59.958188  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.958195  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:59.958200  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:59.958261  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:59.990305  530956 cri.go:89] found id: ""
	I1212 00:39:59.990319  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.990326  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:59.990331  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:59.990390  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:40:00.115674  530956 cri.go:89] found id: ""
	I1212 00:40:00.115690  530956 logs.go:282] 0 containers: []
	W1212 00:40:00.115699  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:40:00.115705  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:40:00.115778  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:40:00.211546  530956 cri.go:89] found id: ""
	I1212 00:40:00.211573  530956 logs.go:282] 0 containers: []
	W1212 00:40:00.211583  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:40:00.211589  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:40:00.211670  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:40:00.306176  530956 cri.go:89] found id: ""
	I1212 00:40:00.306192  530956 logs.go:282] 0 containers: []
	W1212 00:40:00.306200  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:40:00.306208  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:40:00.306220  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:40:00.433331  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:40:00.433360  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:40:00.458175  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:40:00.458193  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:40:00.603203  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:40:00.592976   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.593818   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596110   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596864   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.598662   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:40:00.592976   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.593818   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596110   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596864   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.598662   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:40:00.603213  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:40:00.603224  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:40:00.674062  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:40:00.674086  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
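The container-status command uses a small fallback chain: if `which crictl` finds the binary, its path is substituted; otherwise the literal word crictl is substituted, that branch fails, and the Docker CLI is tried instead:

    # Prefer crictl when it is on PATH, otherwise fall back to docker.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a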
	I1212 00:40:03.207059  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:40:03.217144  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:40:03.217203  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:40:03.242377  530956 cri.go:89] found id: ""
	I1212 00:40:03.242391  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.242398  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:40:03.242403  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:40:03.242460  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:40:03.268604  530956 cri.go:89] found id: ""
	I1212 00:40:03.268618  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.268625  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:40:03.268630  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:40:03.268691  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:40:03.293354  530956 cri.go:89] found id: ""
	I1212 00:40:03.293367  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.293374  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:40:03.293379  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:40:03.293437  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:40:03.323082  530956 cri.go:89] found id: ""
	I1212 00:40:03.323095  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.323102  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:40:03.323108  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:40:03.323165  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:40:03.348118  530956 cri.go:89] found id: ""
	I1212 00:40:03.348132  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.348138  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:40:03.348144  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:40:03.348203  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:40:03.375333  530956 cri.go:89] found id: ""
	I1212 00:40:03.375346  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.375353  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:40:03.375358  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:40:03.375418  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:40:03.401835  530956 cri.go:89] found id: ""
	I1212 00:40:03.401850  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.401857  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:40:03.401864  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:40:03.401882  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:40:03.467887  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:40:03.459632   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.460370   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.461940   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.462285   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.463794   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:40:03.459632   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.460370   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.461940   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.462285   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.463794   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:40:03.467897  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:40:03.467907  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:40:03.536174  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:40:03.536194  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:40:03.564970  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:40:03.564985  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:40:03.632350  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:40:03.632369  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:40:06.147945  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:40:06.157971  530956 kubeadm.go:602] duration metric: took 4m2.720434125s to restartPrimaryControlPlane
	W1212 00:40:06.158027  530956 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 00:40:06.158103  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
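Four minutes of polling without an apiserver exhausts restartPrimaryControlPlane, so minikube gives up on reusing the existing control plane and wipes it. The reset it runs is the standard destructive one, reproduced here only to make the fallback explicit (the binary path and CRI socket are minikube's own):

    # Destructive: removes /etc/kubernetes manifests and etcd data before re-init.
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force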
	I1212 00:40:06.569482  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:40:06.582591  530956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:40:06.590536  530956 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:40:06.590592  530956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:40:06.598618  530956 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:40:06.598629  530956 kubeadm.go:158] found existing configuration files:
	
	I1212 00:40:06.598698  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:40:06.606769  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:40:06.606840  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:40:06.614547  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:40:06.622660  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:40:06.622739  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:40:06.630003  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:40:06.638125  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:40:06.638179  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:40:06.645410  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:40:06.652882  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:40:06.652943  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
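The sweep above is mechanical: each of the four kubeconfigs under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the reference is missing (here all four are already gone after the reset, so every grep exits with status 2). The equivalent loop, with the endpoint as used in this run:

    endpoint='https://control-plane.minikube.internal:8441'
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" \
            || sudo rm -f "/etc/kubernetes/$f.conf"
    done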
	I1212 00:40:06.660446  530956 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:40:06.700514  530956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:40:06.700561  530956 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:40:06.776561  530956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:40:06.776625  530956 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:40:06.776659  530956 kubeadm.go:319] OS: Linux
	I1212 00:40:06.776702  530956 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:40:06.776749  530956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:40:06.776795  530956 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:40:06.776842  530956 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:40:06.776889  530956 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:40:06.776936  530956 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:40:06.776980  530956 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:40:06.777026  530956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:40:06.777077  530956 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:40:06.848361  530956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:40:06.848476  530956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:40:06.848571  530956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:40:06.858454  530956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:40:06.861922  530956 out.go:252]   - Generating certificates and keys ...
	I1212 00:40:06.862039  530956 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:40:06.862113  530956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:40:06.862184  530956 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:40:06.862240  530956 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:40:06.862305  530956 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:40:06.862362  530956 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:40:06.862420  530956 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:40:06.862477  530956 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:40:06.862546  530956 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:40:06.862613  530956 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:40:06.862665  530956 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:40:06.862736  530956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:40:07.126544  530956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:40:07.166854  530956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:40:07.523509  530956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:40:07.692785  530956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:40:07.825726  530956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:40:07.826395  530956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:40:07.830778  530956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:40:07.833963  530956 out.go:252]   - Booting up control plane ...
	I1212 00:40:07.834090  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:40:07.834172  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:40:07.835198  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:40:07.850333  530956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:40:07.850580  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:40:07.857863  530956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:40:07.858096  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:40:07.858271  530956 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:40:07.986589  530956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:40:07.986752  530956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:44:07.988367  530956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001882345s
	I1212 00:44:07.988392  530956 kubeadm.go:319] 
	I1212 00:44:07.988471  530956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:44:07.988504  530956 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:44:07.988626  530956 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:44:07.988630  530956 kubeadm.go:319] 
	I1212 00:44:07.988743  530956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:44:07.988774  530956 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:44:07.988810  530956 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:44:07.988814  530956 kubeadm.go:319] 
	I1212 00:44:07.993727  530956 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:44:07.994213  530956 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:44:07.994355  530956 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:44:07.994630  530956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 00:44:07.994638  530956 kubeadm.go:319] 
	I1212 00:44:07.994738  530956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 00:44:07.994866  530956 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001882345s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 00:44:07.994955  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 00:44:08.418732  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:44:08.431583  530956 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:44:08.431639  530956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:44:08.439724  530956 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:44:08.439733  530956 kubeadm.go:158] found existing configuration files:
	
	I1212 00:44:08.439785  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:44:08.447652  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:44:08.447708  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:44:08.454853  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:44:08.462499  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:44:08.462562  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:44:08.470106  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:44:08.477811  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:44:08.477868  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:44:08.485348  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:44:08.493142  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:44:08.493207  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:44:08.501010  530956 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:44:08.619087  530956 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:44:08.619550  530956 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:44:08.685435  530956 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:48:10.247562  530956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 00:48:10.247592  530956 kubeadm.go:319] 
	I1212 00:48:10.247688  530956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 00:48:10.252292  530956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:48:10.252346  530956 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:48:10.252445  530956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:48:10.252500  530956 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:48:10.252533  530956 kubeadm.go:319] OS: Linux
	I1212 00:48:10.252577  530956 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:48:10.252624  530956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:48:10.252670  530956 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:48:10.252716  530956 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:48:10.252768  530956 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:48:10.252816  530956 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:48:10.252859  530956 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:48:10.252906  530956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:48:10.252951  530956 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:48:10.253023  530956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:48:10.253117  530956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:48:10.253205  530956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:48:10.253277  530956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:48:10.256411  530956 out.go:252]   - Generating certificates and keys ...
	I1212 00:48:10.256515  530956 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:48:10.256580  530956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:48:10.256656  530956 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:48:10.256724  530956 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:48:10.256818  530956 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:48:10.256878  530956 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:48:10.256941  530956 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:48:10.257008  530956 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:48:10.257086  530956 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:48:10.257157  530956 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:48:10.257195  530956 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:48:10.257249  530956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:48:10.257299  530956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:48:10.257355  530956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:48:10.257407  530956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:48:10.257469  530956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:48:10.257524  530956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:48:10.257609  530956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:48:10.257674  530956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:48:10.260574  530956 out.go:252]   - Booting up control plane ...
	I1212 00:48:10.260690  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:48:10.260801  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:48:10.260876  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:48:10.260981  530956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:48:10.261102  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:48:10.261235  530956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:48:10.261332  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:48:10.261377  530956 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:48:10.261506  530956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:48:10.261614  530956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:48:10.261707  530956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000091689s
	I1212 00:48:10.261721  530956 kubeadm.go:319] 
	I1212 00:48:10.261778  530956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:48:10.261809  530956 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:48:10.261921  530956 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:48:10.261925  530956 kubeadm.go:319] 
	I1212 00:48:10.262045  530956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:48:10.262083  530956 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:48:10.262112  530956 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:48:10.262133  530956 kubeadm.go:319] 
	I1212 00:48:10.262182  530956 kubeadm.go:403] duration metric: took 12m6.858628348s to StartCluster
	I1212 00:48:10.262232  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:48:10.262300  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:48:10.289138  530956 cri.go:89] found id: ""
	I1212 00:48:10.289156  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.289163  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:48:10.289168  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:48:10.289230  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:48:10.317667  530956 cri.go:89] found id: ""
	I1212 00:48:10.317681  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.317689  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:48:10.317694  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:48:10.317758  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:48:10.347070  530956 cri.go:89] found id: ""
	I1212 00:48:10.347083  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.347091  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:48:10.347096  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:48:10.347155  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:48:10.373637  530956 cri.go:89] found id: ""
	I1212 00:48:10.373650  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.373658  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:48:10.373663  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:48:10.373722  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:48:10.401060  530956 cri.go:89] found id: ""
	I1212 00:48:10.401074  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.401081  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:48:10.401086  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:48:10.401146  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:48:10.426271  530956 cri.go:89] found id: ""
	I1212 00:48:10.426296  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.426303  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:48:10.426309  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:48:10.426375  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:48:10.451340  530956 cri.go:89] found id: ""
	I1212 00:48:10.451354  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.451361  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:48:10.451369  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:48:10.451379  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:48:10.526222  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:48:10.526241  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:48:10.557574  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:48:10.557591  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:48:10.627641  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:48:10.627659  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:48:10.642797  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:48:10.642812  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:48:10.704719  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:48:10.695841   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.696445   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698250   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698875   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.700463   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:48:10.695841   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.696445   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698250   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698875   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.700463   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 00:48:10.704732  530956 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000091689s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 00:48:10.704782  530956 out.go:285] * 
	W1212 00:48:10.704838  530956 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[stdout and stderr identical to the second kubeadm init attempt's output quoted above]
	
	W1212 00:48:10.704854  530956 out.go:285] * 
	W1212 00:48:10.706992  530956 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:48:10.711878  530956 out.go:203] 
	W1212 00:48:10.714724  530956 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[stdout and stderr identical to the second kubeadm init attempt's output quoted above]
	
	W1212 00:48:10.714773  530956 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 00:48:10.714793  530956 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 00:48:10.717973  530956 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167671353Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167708004Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167752311Z" level=info msg="Create NRI interface"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167872177Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167882466Z" level=info msg="runtime interface created"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167904357Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167910363Z" level=info msg="runtime interface starting up..."
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167919183Z" level=info msg="starting plugins..."
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167936889Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.168004375Z" level=info msg="No systemd watchdog enabled"
	Dec 12 00:36:02 functional-035643 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.854632207Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=640da022-2edf-494c-a660-79e3ab919eba name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.855342483Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=673ecd0d-a1ac-45d5-bb90-3e1f04cdc90f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.855810714Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c57f39b7-fb58-4f67-bde4-1b55c2187b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.856291532Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a97cf7ab-fcf0-4971-8a79-d2c53b6e4ee5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.856721905Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=01bb9bbf-51cf-478f-81f3-99ec7edffcf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.857120764Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=272d9706-6818-4f2e-bd33-95134bf8fb23 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.857524931Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=50b82acc-740c-444d-8ec5-a3c84ad4b6d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.688638675Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=4b143437-6a5c-4f02-b714-2d1bb8cb5a7a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689301235Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=54834de4-a19b-47fe-b478-123a3a9a03c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689852011Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a783bb1e-a84b-4d8e-b3d6-349f1b7407cf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.690318153Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d1830fa6-0c29-40cb-a67f-5512d68b4fbf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691052318Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=28af2ec8-e6ac-48d5-8255-6af4687f21e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691575Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=e0b0d4f6-b48c-4765-bc0e-dcfd4d36d892 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.69204275Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5eedd5b1-6dbb-4fbd-8ae6-426f470f128b name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:48:11.905940   21201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:11.906333   21201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:11.907984   21201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:11.908438   21201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:11.909849   21201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 00:17] overlayfs: idmapped layers are currently not supported
	[Dec12 00:18] overlayfs: idmapped layers are currently not supported
	[Dec12 00:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:48:11 up  3:30,  0 user,  load average: 0.04, 0.18, 0.46
	Linux functional-035643 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:48:09 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:48:09 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 960.
	Dec 12 00:48:09 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:48:09 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:48:10 functional-035643 kubelet[21009]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:48:10 functional-035643 kubelet[21009]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:48:10 functional-035643 kubelet[21009]: E1212 00:48:10.016030   21009 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:48:10 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:48:10 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:48:10 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 12 00:48:10 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:48:10 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:48:10 functional-035643 kubelet[21098]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:48:10 functional-035643 kubelet[21098]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:48:10 functional-035643 kubelet[21098]: E1212 00:48:10.796506   21098 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:48:10 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:48:10 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:48:11 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 12 00:48:11 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:48:11 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:48:11 functional-035643 kubelet[21119]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:48:11 functional-035643 kubelet[21119]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:48:11 functional-035643 kubelet[21119]: E1212 00:48:11.530594   21119 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:48:11 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:48:11 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 2 (333.778695ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.22s)
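Editor's note: the FAIL above is a kubelet crash loop, not a flake. Validation aborts with "kubelet is configured to not run on a host using cgroup v1", systemd restarts the unit (restart counter 960 through 962 in the excerpt), and the apiserver on port 8441 never comes up. A minimal triage sketch, outside the test harness, to confirm which cgroup hierarchy the host runs:

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means the
	# legacy cgroup v1 layout that this kubelet build refuses to start on.
	stat -fc %T /sys/fs/cgroup

On a cgroup v1 host such as this Ubuntu 20.04 runner, it should print "tmpfs", consistent with the validation error in the kubelet journal.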

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-035643 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-035643 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (64.332909ms)

** stderr ** 
	E1212 00:48:12.913608  543015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:48:12.915103  543015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:48:12.916570  543015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:48:12.918030  543015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:48:12.919454  543015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-035643 get po -l tier=control-plane -n kube-system -o=json": exit status 1
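Editor's note: all five memcache.go retries above fail with connection refused against 192.168.49.2:8441; they are one symptom (the apiserver is down), not five separate faults. A hedged sketch, not part of functional_test.go, that gates the pod query on the apiserver answering its standard /livez health endpoint first:

	# Probe the apiserver before querying it; -k skips certificate checks.
	if curl -ks --max-time 5 https://192.168.49.2:8441/livez >/dev/null; then
	    kubectl --context functional-035643 get po -l tier=control-plane -n kube-system -o=json
	else
	    echo "apiserver 192.168.49.2:8441 unreachable, skipping kubectl" >&2
	fi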
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-035643
helpers_test.go:244: (dbg) docker inspect functional-035643:

-- stdout --
	[
	    {
	        "Id": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	        "Created": "2025-12-12T00:21:16.539894649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:21:16.600605162Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hostname",
	        "HostsPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hosts",
	        "LogPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a-json.log",
	        "Name": "/functional-035643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-035643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	                "LowerDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-035643",
	                "Source": "/var/lib/docker/volumes/functional-035643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035643",
	                "name.minikube.sigs.k8s.io": "functional-035643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ede6a17442d6bf83b8f4c9f93f252345cec3d0406f82de2d6bd2cfd4713e2163",
	            "SandboxKey": "/var/run/docker/netns/ede6a17442d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d5:12:89:ea:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad01995b183fdebead6c725e2b942ae8ce2d3964b3552789fe5b50ee7e7239a3",
	                    "EndpointID": "d429a1cd0f840d042af4ad7ea0bda6067a342be7fb552083411004a3604b0124",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035643",
	                        "02b8c8e636a5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
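Editor's note: the inspect output confirms the container itself is healthy ("Status": "running") and that every service port is published on an ephemeral 127.0.0.1 port, e.g. 8441/tcp (the apiserver) mapped to 33186. The same Go template minikube applies to 22/tcp in the Last Start log below extracts any of these mappings; a sketch for the apiserver port:

	# Print the 127.0.0.1 host port that forwards to 8441/tcp in the container.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-035643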
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643: exit status 2 (328.079038ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
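Editor's note: taken together, the two status probes explain the picture: {{.Host}} reports Running because the docker container is up, while {{.APIServer}} reported Stopped earlier because kubelet never brought the control plane up, so both probes exit with status 2 (which the harness notes "may be ok"). Both fields come from the same status struct, so one call can show the split; a sketch reusing the flags already seen in this report:

	# Host stays Running while the apiserver is Stopped, hence exit status 2.
	out/minikube-linux-arm64 status --format 'host={{.Host}} apiserver={{.APIServer}}' -p functional-035643 -n functional-035643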
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-921447 image ls --format short --alsologtostderr                                                                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image ls --format json --alsologtostderr                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-921447 ssh pgrep buildkitd                                                                                                             │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ image   │ functional-921447 image ls --format table --alsologtostderr                                                                                       │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image build -t localhost/my-image:functional-921447 testdata/build --alsologtostderr                                            │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ image   │ functional-921447 image ls                                                                                                                        │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ delete  │ -p functional-921447                                                                                                                              │ functional-921447 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ start   │ -p functional-035643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ start   │ -p functional-035643 --alsologtostderr -v=8                                                                                                       │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:29 UTC │                     │
	│ cache   │ functional-035643 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache add registry.k8s.io/pause:latest                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache add minikube-local-cache-test:functional-035643                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ functional-035643 cache delete minikube-local-cache-test:functional-035643                                                                        │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl images                                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ cache   │ functional-035643 cache reload                                                                                                                    │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ kubectl │ functional-035643 kubectl -- --context functional-035643 get pods                                                                                 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ start   │ -p functional-035643 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:58.676999  530956 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:58.677109  530956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:58.677113  530956 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:58.677117  530956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:58.677347  530956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:35:58.677686  530956 out.go:368] Setting JSON to false
	I1212 00:35:58.678525  530956 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11904,"bootTime":1765487855,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:35:58.678585  530956 start.go:143] virtualization:  
	I1212 00:35:58.682116  530956 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:35:58.686138  530956 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:58.686257  530956 notify.go:221] Checking for updates...
	I1212 00:35:58.691862  530956 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:58.694918  530956 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:35:58.697806  530956 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:35:58.700662  530956 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:35:58.703472  530956 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:58.706890  530956 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:58.706982  530956 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:58.735768  530956 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:35:58.735882  530956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:58.786774  530956 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:35:58.777518712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:58.786886  530956 docker.go:319] overlay module found
	I1212 00:35:58.790016  530956 out.go:179] * Using the docker driver based on existing profile
	I1212 00:35:58.792828  530956 start.go:309] selected driver: docker
	I1212 00:35:58.792840  530956 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:58.792956  530956 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:58.793078  530956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:58.848144  530956 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:35:58.839160729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:58.848551  530956 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:35:58.848575  530956 cni.go:84] Creating CNI manager for ""
	I1212 00:35:58.848625  530956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:58.848666  530956 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:58.851767  530956 out.go:179] * Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	I1212 00:35:58.854549  530956 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:35:58.857426  530956 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:35:58.860284  530956 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:58.860323  530956 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:35:58.860332  530956 cache.go:65] Caching tarball of preloaded images
	I1212 00:35:58.860357  530956 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:35:58.860418  530956 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:35:58.860426  530956 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:35:58.860536  530956 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/config.json ...
	I1212 00:35:58.879785  530956 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:35:58.879795  530956 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:35:58.879813  530956 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:35:58.879843  530956 start.go:360] acquireMachinesLock for functional-035643: {Name:mkb0cdc7d354412594dc63c0234fde00134e758d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:35:58.879904  530956 start.go:364] duration metric: took 45.603µs to acquireMachinesLock for "functional-035643"
	I1212 00:35:58.879924  530956 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:35:58.879928  530956 fix.go:54] fixHost starting: 
	I1212 00:35:58.880192  530956 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:35:58.897119  530956 fix.go:112] recreateIfNeeded on functional-035643: state=Running err=<nil>
	W1212 00:35:58.897146  530956 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:35:58.900349  530956 out.go:252] * Updating the running docker "functional-035643" container ...
	I1212 00:35:58.900378  530956 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:58.900465  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:58.917663  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:58.917980  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:58.917985  530956 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:59.082110  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:35:59.082124  530956 ubuntu.go:182] provisioning hostname "functional-035643"
	I1212 00:35:59.082187  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.099710  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:59.100009  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:59.100017  530956 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-035643 && echo "functional-035643" | sudo tee /etc/hostname
	I1212 00:35:59.259555  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:35:59.259640  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.277248  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:59.277556  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:59.277570  530956 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-035643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-035643/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-035643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:59.427001  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:59.427018  530956 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 00:35:59.427041  530956 ubuntu.go:190] setting up certificates
	I1212 00:35:59.427057  530956 provision.go:84] configureAuth start
	I1212 00:35:59.427116  530956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:35:59.444510  530956 provision.go:143] copyHostCerts
	I1212 00:35:59.444577  530956 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 00:35:59.444584  530956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:35:59.444656  530956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 00:35:59.444762  530956 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 00:35:59.444766  530956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:35:59.444790  530956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 00:35:59.444853  530956 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 00:35:59.444856  530956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:35:59.444879  530956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 00:35:59.444932  530956 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.functional-035643 san=[127.0.0.1 192.168.49.2 functional-035643 localhost minikube]
	I1212 00:35:59.773887  530956 provision.go:177] copyRemoteCerts
	I1212 00:35:59.773940  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:59.773979  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.792006  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:35:59.898459  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:35:59.916125  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:35:59.934437  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:35:59.951804  530956 provision.go:87] duration metric: took 524.726096ms to configureAuth
	I1212 00:35:59.951820  530956 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:59.952018  530956 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:59.952114  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.968939  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:59.969228  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:59.969239  530956 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:36:00.563754  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:36:00.563766  530956 machine.go:97] duration metric: took 1.663381425s to provisionDockerMachine
	I1212 00:36:00.563776  530956 start.go:293] postStartSetup for "functional-035643" (driver="docker")
	I1212 00:36:00.563787  530956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:36:00.563864  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:36:00.563909  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.587628  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:00.694584  530956 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:36:00.698084  530956 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:36:00.698101  530956 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:36:00.698111  530956 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 00:36:00.698167  530956 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 00:36:00.698253  530956 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 00:36:00.698337  530956 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> hosts in /etc/test/nested/copy/490954
	I1212 00:36:00.698388  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/490954
	I1212 00:36:00.706001  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:36:00.723687  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts --> /etc/test/nested/copy/490954/hosts (40 bytes)
	I1212 00:36:00.741785  530956 start.go:296] duration metric: took 177.995516ms for postStartSetup
	I1212 00:36:00.741883  530956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:36:00.741922  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.760230  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:00.864012  530956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:36:00.868713  530956 fix.go:56] duration metric: took 1.988777195s for fixHost
	I1212 00:36:00.868727  530956 start.go:83] releasing machines lock for "functional-035643", held for 1.988815594s
	I1212 00:36:00.868792  530956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:36:00.885011  530956 ssh_runner.go:195] Run: cat /version.json
	I1212 00:36:00.885055  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.885313  530956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:36:00.885366  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.906879  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:00.908992  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:01.113200  530956 ssh_runner.go:195] Run: systemctl --version
	I1212 00:36:01.120029  530956 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:36:01.159180  530956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:36:01.163912  530956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:36:01.163983  530956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:36:01.172622  530956 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:36:01.172636  530956 start.go:496] detecting cgroup driver to use...
	I1212 00:36:01.172680  530956 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:36:01.172728  530956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:36:01.189532  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:36:01.203890  530956 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:36:01.203963  530956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:36:01.220816  530956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:36:01.234536  530956 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:36:01.370158  530956 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:36:01.488527  530956 docker.go:234] disabling docker service ...
	I1212 00:36:01.488594  530956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:36:01.503932  530956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:36:01.516796  530956 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:36:01.637401  530956 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:36:01.761796  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:36:01.774534  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:36:01.788471  530956 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:36:01.788535  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.797095  530956 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:36:01.797168  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.806445  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.815271  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.824092  530956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:36:01.832291  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.841209  530956 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.851179  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.859893  530956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:36:01.867359  530956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:36:01.874599  530956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:01.993195  530956 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:36:02.173735  530956 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:36:02.173807  530956 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:36:02.177649  530956 start.go:564] Will wait 60s for crictl version
	I1212 00:36:02.177702  530956 ssh_runner.go:195] Run: which crictl
	I1212 00:36:02.181255  530956 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:36:02.206520  530956 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:36:02.206592  530956 ssh_runner.go:195] Run: crio --version
	I1212 00:36:02.236053  530956 ssh_runner.go:195] Run: crio --version
	I1212 00:36:02.270501  530956 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:36:02.273364  530956 cli_runner.go:164] Run: docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
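The --format template above flattens name, driver, subnet, gateway, MTU and container IPs into one blob. Reduced to just the subnet, the same Go-template mechanism reads (network name from the log, output value illustrative):

    docker network inspect functional-035643 \
      --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
    # e.g. 192.168.49.0/24, matching the 192.168.49.x addresses in this log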
	I1212 00:36:02.289602  530956 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:36:02.296412  530956 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 00:36:02.299311  530956 kubeadm.go:884] updating cluster {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:36:02.299467  530956 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:36:02.299536  530956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:36:02.337479  530956 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:36:02.337493  530956 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:36:02.337550  530956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:36:02.363122  530956 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:36:02.363134  530956 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:36:02.363141  530956 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 00:36:02.363237  530956 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-035643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
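The [Unit]/[Service] fragment above becomes a systemd drop-in (the scp lines below show it landing in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A quick way to inspect the merged unit on the node, sketched with standard systemctl calls:

    # Show kubelet.service plus every drop-in, including 10-kubeadm.conf
    systemctl cat kubelet
    # Confirm the ExecStart override is active after daemon-reload
    systemctl show -p ExecStart kubelet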
	I1212 00:36:02.363318  530956 ssh_runner.go:195] Run: crio config
	I1212 00:36:02.413513  530956 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 00:36:02.413532  530956 cni.go:84] Creating CNI manager for ""
	I1212 00:36:02.413540  530956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:36:02.413548  530956 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:36:02.413569  530956 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-035643 NodeName:functional-035643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:36:02.413686  530956 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-035643"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
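A config like the one above can be sanity-checked before kubeadm consumes it; recent kubeadm releases ship a validate subcommand (a sketch, with binary and config paths taken from the log):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml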
	
	I1212 00:36:02.413753  530956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:36:02.421266  530956 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:36:02.421324  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:36:02.428464  530956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:36:02.441052  530956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:36:02.453157  530956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1212 00:36:02.466066  530956 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:36:02.472532  530956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:02.578480  530956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:36:02.719058  530956 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643 for IP: 192.168.49.2
	I1212 00:36:02.719069  530956 certs.go:195] generating shared ca certs ...
	I1212 00:36:02.719086  530956 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:36:02.719283  530956 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 00:36:02.719337  530956 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 00:36:02.719344  530956 certs.go:257] generating profile certs ...
	I1212 00:36:02.719449  530956 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key
	I1212 00:36:02.719541  530956 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493
	I1212 00:36:02.719585  530956 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key
	I1212 00:36:02.719735  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 00:36:02.719767  530956 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 00:36:02.719779  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:36:02.719809  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:36:02.719833  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:36:02.719859  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 00:36:02.719902  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:36:02.720656  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:36:02.742914  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:36:02.761747  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:36:02.779250  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:36:02.796535  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:36:02.813979  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:36:02.832344  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:36:02.850165  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:36:02.867847  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 00:36:02.887774  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:36:02.905148  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 00:36:02.923137  530956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:36:02.936200  530956 ssh_runner.go:195] Run: openssl version
	I1212 00:36:02.943771  530956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 00:36:02.951677  530956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 00:36:02.959104  530956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 00:36:02.962881  530956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:36:02.962937  530956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 00:36:03.006038  530956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
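The test -L probe above ties back to the openssl x509 -hash call two lines earlier: OpenSSL resolves CA certificates by subject hash, and the symlink name 51391683.0 is that hash plus a collision counter. The mechanism in isolation:

    # Print the subject hash of the cert (51391683 for this one, per the log)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
    # The trust-store entry is just <hash>.0 pointing at the cert
    ls -l /etc/ssl/certs/51391683.0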
	I1212 00:36:03.014202  530956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.022168  530956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 00:36:03.030174  530956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.033892  530956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.033949  530956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.075143  530956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:36:03.082587  530956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.089740  530956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:36:03.097209  530956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.100982  530956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.101039  530956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.141961  530956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:36:03.149082  530956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:36:03.152710  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:36:03.193308  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:36:03.236349  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:36:03.279368  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:36:03.320758  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:36:03.362313  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
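Each -checkend 86400 above asks whether the certificate stays valid for another 86400 seconds (24 hours); exit status 0 means it will not expire inside that window. For example:

    if openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver.crt; then
      echo "cert valid for at least another 24h"
    else
      echo "cert expires within 24h - would be regenerated"
    fi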
	I1212 00:36:03.403564  530956 kubeadm.go:401] StartCluster: {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:36:03.403639  530956 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:36:03.403697  530956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:36:03.429883  530956 cri.go:89] found id: ""
	I1212 00:36:03.429959  530956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:36:03.437518  530956 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:36:03.437528  530956 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:36:03.437580  530956 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:36:03.444705  530956 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.445211  530956 kubeconfig.go:125] found "functional-035643" server: "https://192.168.49.2:8441"
	I1212 00:36:03.446485  530956 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:36:03.453928  530956 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 00:21:24.717912452 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 00:36:02.461560447 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1212 00:36:03.453947  530956 kubeadm.go:1161] stopping kube-system containers ...
	I1212 00:36:03.453959  530956 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 00:36:03.454013  530956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:36:03.481725  530956 cri.go:89] found id: ""
	I1212 00:36:03.481784  530956 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:36:03.499216  530956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:36:03.507872  530956 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 00:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 12 00:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 12 00:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 00:25 /etc/kubernetes/scheduler.conf
	
	I1212 00:36:03.507966  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:36:03.516663  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:36:03.524482  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.524541  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:36:03.532121  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:36:03.539690  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.539749  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:36:03.547386  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:36:03.555458  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.555515  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
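The grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8441 is deleted so the kubeconfig phase below regenerates it. The same logic as a standalone sketch:

    ep="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ep" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # stale endpoint: let kubeadm rewrite it
    done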
	I1212 00:36:03.563050  530956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:36:03.570932  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:03.615951  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.017170  530956 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.401194576s)
	I1212 00:36:05.017241  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.218047  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.283161  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
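Instead of one full kubeadm init, the restart path replays individual init phases against the same config, in the order shown above. Condensed (binary and config paths from the log):

    K=/var/lib/minikube/binaries/v1.35.0-beta.0
    C=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K:$PATH" kubeadm init phase certs all --config "$C"
    sudo env PATH="$K:$PATH" kubeadm init phase kubeconfig all --config "$C"
    sudo env PATH="$K:$PATH" kubeadm init phase kubelet-start --config "$C"
    sudo env PATH="$K:$PATH" kubeadm init phase control-plane all --config "$C"
    sudo env PATH="$K:$PATH" kubeadm init phase etcd local --config "$C"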
	I1212 00:36:05.326722  530956 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:36:05.326794  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:05.827661  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same `sudo pgrep -xnf kube-apiserver.*minikube.*` probe repeated every ~500ms from 00:36:06 through 00:37:04, with no apiserver process found ...]
	I1212 00:37:04.827730  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
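The run above is minikube polling for a kube-apiserver process about twice a second; here the process never appears before the wait gives up. The loop is equivalent to something like:

    # Poll for the apiserver process, give up after ~60s
    for i in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done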
	I1212 00:37:05.326940  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:05.327024  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:05.355488  530956 cri.go:89] found id: ""
	I1212 00:37:05.355502  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.355509  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:05.355514  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:05.355580  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:05.379984  530956 cri.go:89] found id: ""
	I1212 00:37:05.379998  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.380005  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:05.380010  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:05.380068  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:05.404986  530956 cri.go:89] found id: ""
	I1212 00:37:05.405001  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.405010  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:05.405015  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:05.405072  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:05.429349  530956 cri.go:89] found id: ""
	I1212 00:37:05.429363  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.429370  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:05.429375  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:05.429438  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:05.453950  530956 cri.go:89] found id: ""
	I1212 00:37:05.453963  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.453970  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:05.453975  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:05.454030  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:05.481105  530956 cri.go:89] found id: ""
	I1212 00:37:05.481118  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.481126  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:05.481131  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:05.481188  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:05.506041  530956 cri.go:89] found id: ""
	I1212 00:37:05.506054  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.506062  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:05.506069  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:05.506079  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:05.575208  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:05.575226  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:05.602842  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:05.602858  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:05.674408  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:05.674425  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:05.688466  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:05.688482  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:05.756639  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:05.748526   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.749193   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.750701   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.751299   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.752883   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:05.748526   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.749193   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.750701   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.751299   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.752883   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
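The connection-refused errors above show nothing is listening on the apiserver port; the same check, independent of kubectl, is simply (port from the log):

    # Connection refused -> no apiserver bound on 8441
    curl -sk https://localhost:8441/healthz || echo "apiserver not listening"
    sudo ss -ltnp | grep 8441 || echo "no listener on port 8441"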
	I1212 00:37:08.256849  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:08.268489  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:08.268547  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:08.294558  530956 cri.go:89] found id: ""
	I1212 00:37:08.294571  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.294578  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:08.294583  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:08.294647  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:08.324264  530956 cri.go:89] found id: ""
	I1212 00:37:08.324277  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.324284  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:08.324289  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:08.324345  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:08.349672  530956 cri.go:89] found id: ""
	I1212 00:37:08.349685  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.349692  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:08.349697  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:08.349755  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:08.375495  530956 cri.go:89] found id: ""
	I1212 00:37:08.375509  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.375516  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:08.375521  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:08.375579  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:08.405282  530956 cri.go:89] found id: ""
	I1212 00:37:08.405305  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.405312  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:08.405317  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:08.405384  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:08.431165  530956 cri.go:89] found id: ""
	I1212 00:37:08.431178  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.431185  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:08.431190  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:08.431255  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:08.456458  530956 cri.go:89] found id: ""
	I1212 00:37:08.456472  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.456479  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:08.456487  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:08.456498  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:08.470633  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:08.470647  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:08.537226  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:08.528672   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.529056   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.530703   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.531301   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.532944   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:08.528672   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.529056   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.530703   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.531301   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.532944   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:08.537245  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:08.537256  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:08.606512  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:08.606534  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:08.634126  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:08.634142  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:11.201712  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:11.211510  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:11.211571  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:11.249104  530956 cri.go:89] found id: ""
	I1212 00:37:11.249118  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.249135  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:11.249141  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:11.249214  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:11.285113  530956 cri.go:89] found id: ""
	I1212 00:37:11.285132  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.285143  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:11.285148  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:11.285218  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:11.315788  530956 cri.go:89] found id: ""
	I1212 00:37:11.315802  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.315809  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:11.315814  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:11.315875  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:11.346544  530956 cri.go:89] found id: ""
	I1212 00:37:11.346558  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.346565  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:11.346571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:11.346629  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:11.376168  530956 cri.go:89] found id: ""
	I1212 00:37:11.376192  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.376199  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:11.376205  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:11.376274  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:11.401416  530956 cri.go:89] found id: ""
	I1212 00:37:11.401430  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.401437  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:11.401442  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:11.401501  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:11.426005  530956 cri.go:89] found id: ""
	I1212 00:37:11.426019  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.426026  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:11.426034  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:11.426044  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:11.440817  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:11.440832  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:11.505805  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:11.496652   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.497359   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.499136   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.499679   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.501366   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
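Every describe-nodes attempt in this run dies the same way: the TCP connect to 127.0.0.1:8441 is refused, i.e. no kube-apiserver is listening there at all, which is consistent with the empty crictl listings above. Two quick checks that reproduce the symptom from inside the node (assuming ss and curl are present on the image, which this log does not confirm):

    sudo ss -ltn | grep 8441 || echo "nothing listening on 8441"
    curl -ksS https://localhost:8441/healthz || echo "connection refused, as in the log"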
	I1212 00:37:11.505819  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:11.505832  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:11.581171  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:11.581192  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:11.614667  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:11.614699  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
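The container-status step above is a shell fallback chain: run crictl by full path when "which" finds it, by bare name otherwise, and fall back to docker only if the crictl invocation fails. The same one-liner, unpacked for readability (behavior unchanged):

    CRICTL="$(which crictl || echo crictl)"    # full path if installed, bare name otherwise
    sudo "$CRICTL" ps -a || sudo docker ps -a  # docker runs only when crictl fails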
	I1212 00:37:14.182453  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:14.192683  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:14.192743  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:14.224011  530956 cri.go:89] found id: ""
	I1212 00:37:14.224025  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.224032  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:14.224037  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:14.224097  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:14.253937  530956 cri.go:89] found id: ""
	I1212 00:37:14.253951  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.253958  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:14.253963  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:14.254034  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:14.291025  530956 cri.go:89] found id: ""
	I1212 00:37:14.291039  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.291047  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:14.291057  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:14.291117  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:14.318045  530956 cri.go:89] found id: ""
	I1212 00:37:14.318059  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.318066  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:14.318072  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:14.318133  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:14.345053  530956 cri.go:89] found id: ""
	I1212 00:37:14.345074  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.345082  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:14.345087  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:14.345151  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:14.370315  530956 cri.go:89] found id: ""
	I1212 00:37:14.370328  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.370335  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:14.370340  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:14.370397  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:14.400128  530956 cri.go:89] found id: ""
	I1212 00:37:14.400142  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.400149  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:14.400156  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:14.400166  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:14.469510  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:14.469528  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:14.497946  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:14.497962  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:14.567259  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:14.567276  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:14.581753  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:14.581768  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:14.649334  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:14.641152   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.642071   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.643581   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.644076   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.645435   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:17.151022  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:17.161375  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:17.161433  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:17.187128  530956 cri.go:89] found id: ""
	I1212 00:37:17.187144  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.187151  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:17.187157  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:17.187224  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:17.212545  530956 cri.go:89] found id: ""
	I1212 00:37:17.212560  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.212567  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:17.212573  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:17.212632  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:17.239817  530956 cri.go:89] found id: ""
	I1212 00:37:17.239831  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.239838  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:17.239843  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:17.239900  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:17.267133  530956 cri.go:89] found id: ""
	I1212 00:37:17.267147  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.267155  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:17.267160  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:17.267232  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:17.304534  530956 cri.go:89] found id: ""
	I1212 00:37:17.304548  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.304554  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:17.304559  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:17.304618  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:17.330052  530956 cri.go:89] found id: ""
	I1212 00:37:17.330066  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.330073  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:17.330078  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:17.330133  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:17.354652  530956 cri.go:89] found id: ""
	I1212 00:37:17.354671  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.354678  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:17.354705  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:17.354715  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:17.421755  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:17.412804   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.413382   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.415079   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.415827   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.417552   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:17.421766  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:17.421779  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:17.496810  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:17.496835  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:17.525867  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:17.525886  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:17.594454  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:17.594475  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
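Each iteration opens with the same liveness probe, sudo pgrep -xnf kube-apiserver.*minikube.*, which exits non-zero while no matching process exists; the timestamps (00:37:11, :14, :17, :20, ...) show the wait loop retrying roughly every three seconds. The probe in isolation, with flags as in the log:

    # -f matches against the full command line, -x requires the pattern to
    # match that whole line, -n prints only the newest matching PID.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver process not up yet"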
	I1212 00:37:20.109774  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:20.119858  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:20.119916  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:20.148052  530956 cri.go:89] found id: ""
	I1212 00:37:20.148066  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.148073  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:20.148078  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:20.148138  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:20.172308  530956 cri.go:89] found id: ""
	I1212 00:37:20.172322  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.172329  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:20.172334  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:20.172392  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:20.200721  530956 cri.go:89] found id: ""
	I1212 00:37:20.200735  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.200743  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:20.200748  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:20.200807  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:20.232123  530956 cri.go:89] found id: ""
	I1212 00:37:20.232136  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.232143  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:20.232148  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:20.232207  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:20.263625  530956 cri.go:89] found id: ""
	I1212 00:37:20.263638  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.263646  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:20.263651  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:20.263710  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:20.292234  530956 cri.go:89] found id: ""
	I1212 00:37:20.292248  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.292255  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:20.292260  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:20.292319  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:20.316784  530956 cri.go:89] found id: ""
	I1212 00:37:20.316798  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.316804  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:20.316812  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:20.316822  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:20.382530  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:20.382550  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:20.397572  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:20.397587  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:20.462516  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:20.453480   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.454137   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.455857   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.456349   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.458004   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:20.462526  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:20.462536  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:20.536302  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:20.536323  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:23.067516  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:23.077747  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:23.077816  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:23.102753  530956 cri.go:89] found id: ""
	I1212 00:37:23.102767  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.102774  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:23.102780  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:23.102845  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:23.128706  530956 cri.go:89] found id: ""
	I1212 00:37:23.128719  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.128727  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:23.128732  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:23.128792  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:23.154481  530956 cri.go:89] found id: ""
	I1212 00:37:23.154495  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.154502  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:23.154507  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:23.154572  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:23.179609  530956 cri.go:89] found id: ""
	I1212 00:37:23.179622  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.179630  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:23.179635  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:23.179699  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:23.205151  530956 cri.go:89] found id: ""
	I1212 00:37:23.205165  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.205172  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:23.205177  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:23.205238  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:23.242297  530956 cri.go:89] found id: ""
	I1212 00:37:23.242312  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.242319  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:23.242324  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:23.242393  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:23.271432  530956 cri.go:89] found id: ""
	I1212 00:37:23.271446  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.271453  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:23.271461  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:23.271472  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:23.339885  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:23.339904  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:23.355098  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:23.355115  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:23.419229  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:23.410980   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.411565   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.413072   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.413369   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.415026   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:23.419240  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:23.419250  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:23.486458  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:23.486478  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
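The order of the "Gathering logs for ..." steps shuffles between iterations: dmesg first at 00:37:11, CRI-O first at 00:37:14, describe nodes first at 00:37:17, kubelet first at 00:37:20. A plausible explanation, though nothing in this log confirms it, is that the log sources are kept in a Go map, whose iteration order is randomized. One way to see the shuffle, assuming this transcript is saved as minikube.log (hypothetical filename):

    grep -oE 'Gathering logs for [A-Za-z-]+( [a-z]+)?' minikube.log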
	I1212 00:37:26.021866  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:26.032710  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:26.032772  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:26.058774  530956 cri.go:89] found id: ""
	I1212 00:37:26.058811  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.058818  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:26.058824  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:26.058887  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:26.084731  530956 cri.go:89] found id: ""
	I1212 00:37:26.084746  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.084753  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:26.084758  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:26.084821  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:26.110515  530956 cri.go:89] found id: ""
	I1212 00:37:26.110529  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.110536  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:26.110541  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:26.110598  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:26.137082  530956 cri.go:89] found id: ""
	I1212 00:37:26.137095  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.137103  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:26.137112  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:26.137172  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:26.162724  530956 cri.go:89] found id: ""
	I1212 00:37:26.162738  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.162745  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:26.162751  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:26.162818  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:26.188538  530956 cri.go:89] found id: ""
	I1212 00:37:26.188559  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.188566  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:26.188571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:26.188630  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:26.219848  530956 cri.go:89] found id: ""
	I1212 00:37:26.219862  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.219869  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:26.219876  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:26.219887  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:26.291444  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:26.291463  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:26.306938  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:26.306954  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:26.368571  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:26.360215   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.360983   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.362459   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.362990   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.364656   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:26.368581  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:26.368593  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:26.436229  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:26.436247  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:28.966999  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:28.976928  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:28.976991  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:29.003108  530956 cri.go:89] found id: ""
	I1212 00:37:29.003123  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.003130  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:29.003136  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:29.003212  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:29.028803  530956 cri.go:89] found id: ""
	I1212 00:37:29.028817  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.028824  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:29.028828  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:29.028885  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:29.056738  530956 cri.go:89] found id: ""
	I1212 00:37:29.056758  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.056765  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:29.056770  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:29.056828  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:29.081270  530956 cri.go:89] found id: ""
	I1212 00:37:29.081284  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.081291  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:29.081297  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:29.081354  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:29.106545  530956 cri.go:89] found id: ""
	I1212 00:37:29.106559  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.106566  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:29.106571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:29.106629  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:29.133248  530956 cri.go:89] found id: ""
	I1212 00:37:29.133262  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.133270  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:29.133275  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:29.133335  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:29.162606  530956 cri.go:89] found id: ""
	I1212 00:37:29.162620  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.162627  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:29.162634  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:29.162645  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:29.228360  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:29.228380  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:29.244576  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:29.244593  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:29.318498  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:29.310629   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.311117   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.312690   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.313032   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.314613   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:29.318508  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:29.318519  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:29.386989  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:29.387009  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:31.922335  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:31.932487  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:31.932555  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:31.958330  530956 cri.go:89] found id: ""
	I1212 00:37:31.958344  530956 logs.go:282] 0 containers: []
	W1212 00:37:31.958351  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:31.958356  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:31.958413  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:31.986166  530956 cri.go:89] found id: ""
	I1212 00:37:31.986184  530956 logs.go:282] 0 containers: []
	W1212 00:37:31.986193  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:31.986198  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:31.986263  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:32.018215  530956 cri.go:89] found id: ""
	I1212 00:37:32.018229  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.018236  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:32.018241  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:32.018309  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:32.045496  530956 cri.go:89] found id: ""
	I1212 00:37:32.045510  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.045526  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:32.045531  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:32.045599  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:32.071713  530956 cri.go:89] found id: ""
	I1212 00:37:32.071727  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.071733  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:32.071748  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:32.071809  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:32.096398  530956 cri.go:89] found id: ""
	I1212 00:37:32.096412  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.096419  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:32.096424  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:32.096481  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:32.121995  530956 cri.go:89] found id: ""
	I1212 00:37:32.122009  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.122016  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:32.122024  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:32.122033  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:32.187537  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:32.187556  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:32.202073  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:32.202088  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:32.283678  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:32.269254   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.269661   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.271076   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.271701   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.275343   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:32.283688  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:32.283699  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:32.352426  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:32.352446  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:34.887315  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:34.897374  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:34.897440  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:34.922626  530956 cri.go:89] found id: ""
	I1212 00:37:34.922641  530956 logs.go:282] 0 containers: []
	W1212 00:37:34.922648  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:34.922654  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:34.922741  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:34.948176  530956 cri.go:89] found id: ""
	I1212 00:37:34.948190  530956 logs.go:282] 0 containers: []
	W1212 00:37:34.948199  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:34.948204  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:34.948302  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:34.975855  530956 cri.go:89] found id: ""
	I1212 00:37:34.975869  530956 logs.go:282] 0 containers: []
	W1212 00:37:34.975883  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:34.975889  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:34.975954  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:35.008030  530956 cri.go:89] found id: ""
	I1212 00:37:35.008046  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.008054  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:35.008060  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:35.008144  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:35.033803  530956 cri.go:89] found id: ""
	I1212 00:37:35.033816  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.033823  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:35.033828  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:35.033887  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:35.059521  530956 cri.go:89] found id: ""
	I1212 00:37:35.059535  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.059542  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:35.059547  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:35.059604  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:35.084378  530956 cri.go:89] found id: ""
	I1212 00:37:35.084392  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.084399  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:35.084406  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:35.084416  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:35.150144  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:35.150166  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:35.164295  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:35.164311  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:35.237720  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:35.229555   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.230202   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.231874   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.232277   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.233798   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:35.237730  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:35.237740  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:35.309700  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:35.309721  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:37.842191  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:37.852127  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:37.852198  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:37.883852  530956 cri.go:89] found id: ""
	I1212 00:37:37.883866  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.883873  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:37.883879  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:37.883940  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:37.908974  530956 cri.go:89] found id: ""
	I1212 00:37:37.908988  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.908995  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:37.909000  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:37.909058  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:37.934558  530956 cri.go:89] found id: ""
	I1212 00:37:37.934581  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.934588  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:37.934593  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:37.934659  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:37.960620  530956 cri.go:89] found id: ""
	I1212 00:37:37.960634  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.960641  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:37.960653  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:37.960716  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:37.985545  530956 cri.go:89] found id: ""
	I1212 00:37:37.985559  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.985566  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:37.985571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:37.985649  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:38.019481  530956 cri.go:89] found id: ""
	I1212 00:37:38.019496  530956 logs.go:282] 0 containers: []
	W1212 00:37:38.019511  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:38.019517  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:38.019587  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:38.050591  530956 cri.go:89] found id: ""
	I1212 00:37:38.050606  530956 logs.go:282] 0 containers: []
	W1212 00:37:38.050613  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:38.050621  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:38.050631  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:38.118052  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:38.118073  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:38.133136  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:38.133152  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:38.195824  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:38.187464   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.188136   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.189863   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.190376   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.191908   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:38.195836  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:38.195847  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:38.277789  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:38.277816  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:40.807649  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:40.817759  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:40.817820  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:40.843061  530956 cri.go:89] found id: ""
	I1212 00:37:40.843075  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.843082  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:40.843087  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:40.843147  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:40.867922  530956 cri.go:89] found id: ""
	I1212 00:37:40.867936  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.867944  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:40.867949  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:40.868005  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:40.892630  530956 cri.go:89] found id: ""
	I1212 00:37:40.892644  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.892653  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:40.892657  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:40.892716  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:40.918166  530956 cri.go:89] found id: ""
	I1212 00:37:40.918180  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.918187  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:40.918192  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:40.918250  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:40.944075  530956 cri.go:89] found id: ""
	I1212 00:37:40.944088  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.944095  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:40.944100  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:40.944160  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:40.969320  530956 cri.go:89] found id: ""
	I1212 00:37:40.969333  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.969340  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:40.969346  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:40.969405  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:40.997473  530956 cri.go:89] found id: ""
	I1212 00:37:40.997487  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.997494  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:40.997501  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:40.997512  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:41.028728  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:41.028743  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:41.095087  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:41.095107  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:41.109485  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:41.109501  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:41.176844  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:41.166571   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.167470   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.170826   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.171336   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.172874   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:41.176853  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:41.176864  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
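	The empty found id: "" results mean CRI-O reports no control-plane containers at all, not even exited ones (crictl ps -a includes stopped containers). The same probe can be run by hand with the flags shown in the log; the crictl pods line is an added assumption, useful to confirm the pod sandbox was never created either:

	    # Same query minikube runs: all states, filtered by name, IDs only
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # Assumption: if this is also empty, the pod sandbox never existed
	    sudo crictl pods --name=kube-apiserver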
	I1212 00:37:43.749966  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:43.760058  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:43.760118  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:43.785533  530956 cri.go:89] found id: ""
	I1212 00:37:43.785546  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.785554  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:43.785559  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:43.785616  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:43.812938  530956 cri.go:89] found id: ""
	I1212 00:37:43.812952  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.812960  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:43.812964  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:43.813029  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:43.838583  530956 cri.go:89] found id: ""
	I1212 00:37:43.838596  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.838604  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:43.838609  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:43.838669  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:43.864548  530956 cri.go:89] found id: ""
	I1212 00:37:43.864562  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.864569  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:43.864574  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:43.864633  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:43.889391  530956 cri.go:89] found id: ""
	I1212 00:37:43.889405  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.889412  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:43.889417  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:43.889478  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:43.914183  530956 cri.go:89] found id: ""
	I1212 00:37:43.914196  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.914203  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:43.914209  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:43.914268  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:43.941097  530956 cri.go:89] found id: ""
	I1212 00:37:43.941112  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.941119  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:43.941126  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:43.941136  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:44.007607  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:44.007625  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:44.022976  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:44.022993  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:44.087167  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:44.078521   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.079166   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.080973   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.081418   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.083213   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:44.087177  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:44.087190  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:44.156045  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:44.156065  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:46.684537  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:46.694320  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:46.694383  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:46.718727  530956 cri.go:89] found id: ""
	I1212 00:37:46.718741  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.718751  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:46.718756  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:46.718832  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:46.744753  530956 cri.go:89] found id: ""
	I1212 00:37:46.744767  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.744774  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:46.744779  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:46.744838  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:46.773525  530956 cri.go:89] found id: ""
	I1212 00:37:46.773538  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.773546  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:46.773551  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:46.773608  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:46.798518  530956 cri.go:89] found id: ""
	I1212 00:37:46.798532  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.798539  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:46.798544  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:46.798602  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:46.822867  530956 cri.go:89] found id: ""
	I1212 00:37:46.822880  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.822887  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:46.822893  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:46.822949  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:46.849825  530956 cri.go:89] found id: ""
	I1212 00:37:46.849839  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.849846  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:46.849851  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:46.849909  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:46.874986  530956 cri.go:89] found id: ""
	I1212 00:37:46.874999  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.875011  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:46.875019  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:46.875030  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:46.939887  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:46.931753   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.932307   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.933826   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.934346   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.936019   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:46.939896  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:46.939909  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:47.008024  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:47.008044  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:47.036373  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:47.036388  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:47.101329  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:47.101347  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
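	Each retry gathers the same five sources: the kubelet and CRI-O units via journalctl, kernel warnings via dmesg, node state via kubectl describe nodes, and container status via crictl. One manual pass over the same sources, using exactly the commands from the log (run inside the node):

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a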
	I1212 00:37:49.616038  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:49.626178  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:49.626240  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:49.652682  530956 cri.go:89] found id: ""
	I1212 00:37:49.652696  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.652703  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:49.652708  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:49.652766  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:49.679170  530956 cri.go:89] found id: ""
	I1212 00:37:49.679185  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.679191  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:49.679197  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:49.679256  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:49.706504  530956 cri.go:89] found id: ""
	I1212 00:37:49.706518  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.706526  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:49.706532  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:49.706592  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:49.732201  530956 cri.go:89] found id: ""
	I1212 00:37:49.732215  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.732222  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:49.732227  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:49.732287  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:49.757094  530956 cri.go:89] found id: ""
	I1212 00:37:49.757107  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.757115  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:49.757119  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:49.757178  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:49.785367  530956 cri.go:89] found id: ""
	I1212 00:37:49.785382  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.785391  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:49.785396  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:49.785466  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:49.809132  530956 cri.go:89] found id: ""
	I1212 00:37:49.809145  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.809152  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:49.809160  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:49.809171  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:49.874272  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:49.874291  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:49.888851  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:49.888866  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:49.954139  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:49.945852   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.946386   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.948180   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.948551   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.950144   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:49.954152  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:49.954164  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:50.021343  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:50.021364  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:52.550858  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:52.560788  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:52.560857  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:52.589542  530956 cri.go:89] found id: ""
	I1212 00:37:52.589556  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.589563  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:52.589568  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:52.589629  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:52.613111  530956 cri.go:89] found id: ""
	I1212 00:37:52.613124  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.613131  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:52.613136  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:52.613195  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:52.637059  530956 cri.go:89] found id: ""
	I1212 00:37:52.637072  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.637079  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:52.637084  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:52.637142  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:52.661402  530956 cri.go:89] found id: ""
	I1212 00:37:52.661415  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.661422  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:52.661428  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:52.661485  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:52.686208  530956 cri.go:89] found id: ""
	I1212 00:37:52.686221  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.686228  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:52.686234  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:52.686292  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:52.714239  530956 cri.go:89] found id: ""
	I1212 00:37:52.714257  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.714272  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:52.714281  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:52.714360  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:52.738849  530956 cri.go:89] found id: ""
	I1212 00:37:52.738862  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.738871  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:52.738878  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:52.738889  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:52.805309  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:52.796653   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.797158   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.798767   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.799405   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.800977   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:52.805318  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:52.805329  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:52.873118  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:52.873138  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:52.901072  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:52.901088  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:52.967085  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:52.967104  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
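	kubelet itself is healthy enough to answer journalctl, yet no apiserver container ever appears, so the static pod manifests are the next thing to inspect. A sketch assuming the standard kubeadm layout under /etc/kubernetes/manifests (an assumption; this run does not show the path):

	    # Assumption: kubeadm-style static pods; a missing or invalid manifest
	    # keeps the control plane down while kubelet itself stays healthy
	    ls -l /etc/kubernetes/manifests/
	    sudo journalctl -u kubelet -n 400 | grep -iE 'apiserver|static pod|manifest'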
	I1212 00:37:55.482800  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:55.493703  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:55.493761  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:55.527575  530956 cri.go:89] found id: ""
	I1212 00:37:55.527588  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.527595  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:55.527601  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:55.527663  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:55.552177  530956 cri.go:89] found id: ""
	I1212 00:37:55.552191  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.552198  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:55.552203  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:55.552264  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:55.576968  530956 cri.go:89] found id: ""
	I1212 00:37:55.576981  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.576988  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:55.576993  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:55.577054  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:55.603212  530956 cri.go:89] found id: ""
	I1212 00:37:55.603225  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.603232  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:55.603237  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:55.603300  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:55.629922  530956 cri.go:89] found id: ""
	I1212 00:37:55.629936  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.629943  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:55.629949  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:55.630009  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:55.659450  530956 cri.go:89] found id: ""
	I1212 00:37:55.659469  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.659476  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:55.659482  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:55.659540  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:55.683953  530956 cri.go:89] found id: ""
	I1212 00:37:55.683967  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.683974  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:55.683981  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:55.683991  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:55.752000  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:55.752019  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:55.781847  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:55.781863  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:55.846599  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:55.846617  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:55.861470  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:55.861487  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:55.927422  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:55.918837   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.919637   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.921344   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.921624   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.923175   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:58.429107  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:58.438890  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:58.438951  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:58.463332  530956 cri.go:89] found id: ""
	I1212 00:37:58.463346  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.463353  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:58.463358  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:58.463420  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:58.502844  530956 cri.go:89] found id: ""
	I1212 00:37:58.502859  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.502866  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:58.502871  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:58.502934  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:58.535191  530956 cri.go:89] found id: ""
	I1212 00:37:58.535204  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.535211  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:58.535216  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:58.535275  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:58.560276  530956 cri.go:89] found id: ""
	I1212 00:37:58.560290  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.560296  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:58.560302  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:58.560360  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:58.585008  530956 cri.go:89] found id: ""
	I1212 00:37:58.585022  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.585029  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:58.585034  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:58.585092  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:58.610668  530956 cri.go:89] found id: ""
	I1212 00:37:58.610704  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.610712  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:58.610717  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:58.610791  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:58.633946  530956 cri.go:89] found id: ""
	I1212 00:37:58.633960  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.633967  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:58.633974  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:58.633984  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:58.702859  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:58.702878  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:58.730459  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:58.730475  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:58.799001  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:58.799020  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:58.813707  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:58.813724  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:58.880292  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:58.871863   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.872482   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.874082   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.874638   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.876520   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
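	The poll runs on a roughly three-second cadence (00:37:35, :37, :40, :43, :46, :49, :52, :55, :58, 00:38:01) and keeps cycling, presumably until an outer wait times out. An equivalent manual poll while debugging, assuming watch is available in the environment:

	    # Re-run minikube's liveness probe every 3 s until an apiserver process appears
	    watch -n 3 'sudo pgrep -xnf "kube-apiserver.*minikube.*" || echo not running'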
	I1212 00:38:01.380529  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:01.390377  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:01.390440  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:01.414742  530956 cri.go:89] found id: ""
	I1212 00:38:01.414755  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.414763  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:01.414769  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:01.414848  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:01.440014  530956 cri.go:89] found id: ""
	I1212 00:38:01.440028  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.440035  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:01.440040  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:01.440100  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:01.469919  530956 cri.go:89] found id: ""
	I1212 00:38:01.469947  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.469955  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:01.469963  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:01.470025  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:01.502102  530956 cri.go:89] found id: ""
	I1212 00:38:01.502116  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.502123  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:01.502128  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:01.502185  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:01.550477  530956 cri.go:89] found id: ""
	I1212 00:38:01.550497  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.550504  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:01.550509  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:01.550572  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:01.575848  530956 cri.go:89] found id: ""
	I1212 00:38:01.575861  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.575868  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:01.575874  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:01.575933  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:01.601329  530956 cri.go:89] found id: ""
	I1212 00:38:01.601342  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.601350  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:01.601358  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:01.601369  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:01.617336  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:01.617351  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:01.681650  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:01.672976   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.673795   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.675461   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.675793   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.677308   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:01.672976   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.673795   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.675461   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.675793   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.677308   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:01.681659  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:01.681669  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:01.753959  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:01.753987  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:01.784884  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:01.784901  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
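	Every kubectl attempt above fails with "connection refused" on localhost:8441 because nothing is listening on the apiserver port inside the node; 8441 is the non-default apiserver port this cluster was started with, per the URLs in the errors. A quick check from inside the node, assuming curl is present in the node image: a refused connection on the health endpoint confirms the apiserver process itself is gone, rather than a TLS or authorization problem.
	
	    # Run inside the node (e.g. via `minikube ssh`). /livez is the
	    # apiserver's liveness endpoint; "connection refused" here means no
	    # listener at all.
	    curl -k https://localhost:8441/livez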
	I1212 00:38:04.352224  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:04.362582  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:04.362651  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:04.387422  530956 cri.go:89] found id: ""
	I1212 00:38:04.387436  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.387443  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:04.387448  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:04.387515  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:04.416278  530956 cri.go:89] found id: ""
	I1212 00:38:04.416292  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.416298  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:04.416304  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:04.416360  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:04.445370  530956 cri.go:89] found id: ""
	I1212 00:38:04.445384  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.445391  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:04.445397  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:04.445455  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:04.482755  530956 cri.go:89] found id: ""
	I1212 00:38:04.482768  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.482783  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:04.482789  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:04.482857  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:04.509091  530956 cri.go:89] found id: ""
	I1212 00:38:04.509105  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.509120  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:04.509126  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:04.509194  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:04.539958  530956 cri.go:89] found id: ""
	I1212 00:38:04.539980  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.539987  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:04.539995  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:04.540053  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:04.565072  530956 cri.go:89] found id: ""
	I1212 00:38:04.565085  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.565092  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:04.565100  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:04.565110  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:04.632823  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:04.632844  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:04.659747  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:04.659763  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:04.726963  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:04.726980  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:04.742446  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:04.742462  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:04.811712  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:04.802381   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.803275   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.805003   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.805618   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.807784   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:04.802381   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.803275   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.805003   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.805618   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.807784   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:07.313373  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:07.323395  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:07.323461  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:07.349092  530956 cri.go:89] found id: ""
	I1212 00:38:07.349106  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.349114  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:07.349119  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:07.349178  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:07.374733  530956 cri.go:89] found id: ""
	I1212 00:38:07.374747  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.374754  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:07.374759  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:07.374826  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:07.399425  530956 cri.go:89] found id: ""
	I1212 00:38:07.399439  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.399446  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:07.399450  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:07.399509  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:07.423784  530956 cri.go:89] found id: ""
	I1212 00:38:07.423798  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.423805  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:07.423809  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:07.423866  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:07.449601  530956 cri.go:89] found id: ""
	I1212 00:38:07.449615  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.449622  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:07.449627  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:07.449687  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:07.483778  530956 cri.go:89] found id: ""
	I1212 00:38:07.483793  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.483800  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:07.483805  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:07.483863  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:07.514105  530956 cri.go:89] found id: ""
	I1212 00:38:07.514118  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.514126  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:07.514135  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:07.514144  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:07.584461  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:07.584483  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:07.599076  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:07.599092  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:07.662502  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:07.654255   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.655086   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.656662   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.656958   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.658502   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:07.654255   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.655086   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.656662   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.656958   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.658502   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:07.662512  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:07.662524  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:07.730514  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:07.730532  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:10.261580  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:10.271806  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:10.271866  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:10.301488  530956 cri.go:89] found id: ""
	I1212 00:38:10.301509  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.301517  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:10.301522  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:10.301586  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:10.328569  530956 cri.go:89] found id: ""
	I1212 00:38:10.328582  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.328589  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:10.328594  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:10.328651  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:10.352390  530956 cri.go:89] found id: ""
	I1212 00:38:10.352404  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.352411  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:10.352416  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:10.352476  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:10.376595  530956 cri.go:89] found id: ""
	I1212 00:38:10.376608  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.376615  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:10.376620  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:10.376676  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:10.401114  530956 cri.go:89] found id: ""
	I1212 00:38:10.401129  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.401136  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:10.401141  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:10.401202  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:10.426633  530956 cri.go:89] found id: ""
	I1212 00:38:10.426647  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.426654  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:10.426659  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:10.426740  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:10.452233  530956 cri.go:89] found id: ""
	I1212 00:38:10.452246  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.452254  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:10.452262  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:10.452272  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:10.521036  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:10.521055  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:10.535759  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:10.535774  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:10.601793  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:10.593515   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.594074   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.595582   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.596077   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.597523   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:10.593515   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.594074   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.595582   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.596077   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.597523   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:10.601803  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:10.601813  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:10.672541  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:10.672560  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:13.203975  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:13.213736  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:13.213796  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:13.238219  530956 cri.go:89] found id: ""
	I1212 00:38:13.238234  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.238241  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:13.238246  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:13.238303  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:13.262428  530956 cri.go:89] found id: ""
	I1212 00:38:13.262441  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.262449  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:13.262454  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:13.262518  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:13.287118  530956 cri.go:89] found id: ""
	I1212 00:38:13.287132  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.287139  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:13.287144  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:13.287201  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:13.316471  530956 cri.go:89] found id: ""
	I1212 00:38:13.316485  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.316492  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:13.316497  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:13.316554  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:13.340630  530956 cri.go:89] found id: ""
	I1212 00:38:13.340644  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.340651  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:13.340656  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:13.340719  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:13.365167  530956 cri.go:89] found id: ""
	I1212 00:38:13.365180  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.365187  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:13.365192  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:13.365249  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:13.393786  530956 cri.go:89] found id: ""
	I1212 00:38:13.393800  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.393806  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:13.393813  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:13.393824  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:13.460497  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:13.460517  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:13.484321  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:13.484350  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:13.564959  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:13.556484   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.557156   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.558914   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.559521   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.561122   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:13.556484   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.557156   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.558914   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.559521   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.561122   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:13.564970  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:13.564991  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:13.633622  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:13.633641  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:16.165859  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:16.179076  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:16.179137  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:16.204832  530956 cri.go:89] found id: ""
	I1212 00:38:16.204846  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.204853  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:16.204858  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:16.204929  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:16.230899  530956 cri.go:89] found id: ""
	I1212 00:38:16.230912  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.230920  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:16.230924  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:16.230985  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:16.260492  530956 cri.go:89] found id: ""
	I1212 00:38:16.260505  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.260513  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:16.260518  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:16.260582  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:16.285639  530956 cri.go:89] found id: ""
	I1212 00:38:16.285652  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.285660  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:16.285665  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:16.285724  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:16.311240  530956 cri.go:89] found id: ""
	I1212 00:38:16.311253  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.311261  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:16.311266  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:16.311331  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:16.337039  530956 cri.go:89] found id: ""
	I1212 00:38:16.337053  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.337060  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:16.337065  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:16.337132  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:16.363033  530956 cri.go:89] found id: ""
	I1212 00:38:16.363047  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.363053  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:16.363061  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:16.363072  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:16.393154  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:16.393171  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:16.460499  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:16.460516  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:16.475666  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:16.475681  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:16.550358  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:16.542066   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.542789   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.544568   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.545193   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.546806   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:16.542066   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.542789   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.544568   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.545193   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.546806   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:16.550367  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:16.550378  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:19.117450  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:19.129437  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:19.129500  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:19.153970  530956 cri.go:89] found id: ""
	I1212 00:38:19.153983  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.153990  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:19.153995  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:19.154052  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:19.179294  530956 cri.go:89] found id: ""
	I1212 00:38:19.179307  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.179314  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:19.179319  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:19.179381  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:19.205071  530956 cri.go:89] found id: ""
	I1212 00:38:19.205091  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.205098  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:19.205103  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:19.205168  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:19.230084  530956 cri.go:89] found id: ""
	I1212 00:38:19.230098  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.230111  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:19.230118  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:19.230181  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:19.255464  530956 cri.go:89] found id: ""
	I1212 00:38:19.255477  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.255485  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:19.255490  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:19.255549  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:19.285389  530956 cri.go:89] found id: ""
	I1212 00:38:19.285402  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.285409  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:19.285415  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:19.285472  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:19.312947  530956 cri.go:89] found id: ""
	I1212 00:38:19.312960  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.312967  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:19.312975  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:19.312985  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:19.350894  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:19.350911  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:19.417923  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:19.417945  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:19.432429  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:19.432445  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:19.505932  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:19.498121   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.498890   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500411   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500702   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.502118   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:19.498121   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.498890   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500411   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500702   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.502118   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:19.505942  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:19.505964  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:22.083196  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:22.093637  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:22.093699  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:22.118550  530956 cri.go:89] found id: ""
	I1212 00:38:22.118565  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.118572  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:22.118578  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:22.118636  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:22.145134  530956 cri.go:89] found id: ""
	I1212 00:38:22.145147  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.145155  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:22.145159  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:22.145217  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:22.170293  530956 cri.go:89] found id: ""
	I1212 00:38:22.170306  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.170313  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:22.170318  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:22.170386  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:22.197536  530956 cri.go:89] found id: ""
	I1212 00:38:22.197550  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.197571  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:22.197576  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:22.197642  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:22.222476  530956 cri.go:89] found id: ""
	I1212 00:38:22.222490  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.222497  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:22.222502  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:22.222560  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:22.247759  530956 cri.go:89] found id: ""
	I1212 00:38:22.247779  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.247792  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:22.247797  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:22.247865  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:22.278000  530956 cri.go:89] found id: ""
	I1212 00:38:22.278022  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.278030  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:22.278037  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:22.278047  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:22.306112  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:22.306127  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:22.377647  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:22.377675  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:22.394490  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:22.394506  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:22.462988  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:22.454404   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.455058   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.456732   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.457164   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.458875   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:22.454404   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.455058   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.456732   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.457164   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.458875   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:22.462999  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:22.463010  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:25.044675  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:25.054532  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:25.054592  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:25.080041  530956 cri.go:89] found id: ""
	I1212 00:38:25.080055  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.080062  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:25.080068  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:25.080129  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:25.105941  530956 cri.go:89] found id: ""
	I1212 00:38:25.105957  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.105965  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:25.105971  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:25.106038  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:25.136063  530956 cri.go:89] found id: ""
	I1212 00:38:25.136078  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.136086  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:25.136096  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:25.136159  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:25.161125  530956 cri.go:89] found id: ""
	I1212 00:38:25.161140  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.161147  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:25.161153  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:25.161212  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:25.187318  530956 cri.go:89] found id: ""
	I1212 00:38:25.187333  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.187340  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:25.187345  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:25.187407  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:25.213505  530956 cri.go:89] found id: ""
	I1212 00:38:25.213519  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.213528  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:25.213533  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:25.213593  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:25.238804  530956 cri.go:89] found id: ""
	I1212 00:38:25.238818  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.238825  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:25.238833  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:25.238845  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:25.253570  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:25.253586  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:25.319774  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:25.310440   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.311167   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.312774   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.313270   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.315248   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:25.319800  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:25.319811  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:25.392356  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:25.392375  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:25.422668  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:25.422706  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
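
The cycle above is what repeats for the rest of this log: minikube polls for a kube-apiserver process and, finding none, lists CRI containers for each control-plane component, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status diagnostics. A minimal Go sketch of that poll loop, assuming the roughly 3-second interval read off the timestamps above (an illustration, not minikube's actual source):

package main

// Hedged sketch of the readiness poll seen in this log; it simply re-runs
// the same two shell probes the log records on each attempt.
import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for {
		// Mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		for _, c := range components {
			// Mirrors: sudo crictl ps -a --quiet --name=<component>
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
			if len(out) == 0 {
				fmt.Printf("no container found matching %q\n", c)
			}
		}
		time.Sleep(3 * time.Second) // attempts in this log are ~3s apart
	}
}
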
	I1212 00:38:27.990024  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:28.003363  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:28.003444  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:28.033003  530956 cri.go:89] found id: ""
	I1212 00:38:28.033017  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.033024  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:28.033029  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:28.033090  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:28.059854  530956 cri.go:89] found id: ""
	I1212 00:38:28.059869  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.059876  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:28.059881  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:28.059946  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:28.085318  530956 cri.go:89] found id: ""
	I1212 00:38:28.085332  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.085339  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:28.085349  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:28.085408  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:28.111377  530956 cri.go:89] found id: ""
	I1212 00:38:28.111390  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.111397  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:28.111403  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:28.111464  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:28.140880  530956 cri.go:89] found id: ""
	I1212 00:38:28.140894  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.140910  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:28.140915  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:28.140985  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:28.166928  530956 cri.go:89] found id: ""
	I1212 00:38:28.166943  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.166950  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:28.166955  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:28.167013  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:28.193116  530956 cri.go:89] found id: ""
	I1212 00:38:28.193129  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.193136  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:28.193144  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:28.193157  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:28.207536  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:28.207551  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:28.273869  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:28.265632   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.266161   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.267761   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.268328   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.269940   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:28.273878  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:28.273888  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:28.341616  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:28.341634  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:28.370270  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:28.370286  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
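
Every "describe nodes" attempt in this log fails identically: kubectl cannot reach the apiserver on localhost:8441. One way to confirm that nothing is listening on that port, independent of kubectl, is a raw TCP dial (a hedged sketch; the port number is taken from the errors above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Reproduces the log's "dial tcp [::1]:8441: connect: connection refused"
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
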
	I1212 00:38:30.938812  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:30.948944  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:30.949000  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:30.977305  530956 cri.go:89] found id: ""
	I1212 00:38:30.977320  530956 logs.go:282] 0 containers: []
	W1212 00:38:30.977327  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:30.977333  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:30.977393  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:31.004773  530956 cri.go:89] found id: ""
	I1212 00:38:31.004793  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.004802  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:31.004807  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:31.004878  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:31.034217  530956 cri.go:89] found id: ""
	I1212 00:38:31.034231  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.034238  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:31.034243  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:31.034299  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:31.059299  530956 cri.go:89] found id: ""
	I1212 00:38:31.059313  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.059320  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:31.059325  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:31.059389  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:31.085777  530956 cri.go:89] found id: ""
	I1212 00:38:31.085794  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.085801  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:31.085806  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:31.085870  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:31.113432  530956 cri.go:89] found id: ""
	I1212 00:38:31.113445  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.113453  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:31.113458  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:31.113517  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:31.140290  530956 cri.go:89] found id: ""
	I1212 00:38:31.140303  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.140310  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:31.140318  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:31.140329  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:31.170079  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:31.170095  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:31.237344  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:31.237366  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:31.252705  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:31.252722  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:31.314201  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:31.305835   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.306554   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.308300   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.308814   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.310412   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:31.314211  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:31.314222  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:33.887992  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:33.897911  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:33.897978  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:33.922473  530956 cri.go:89] found id: ""
	I1212 00:38:33.922487  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.922494  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:33.922499  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:33.922556  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:33.947695  530956 cri.go:89] found id: ""
	I1212 00:38:33.947709  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.947716  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:33.947720  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:33.947779  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:33.975167  530956 cri.go:89] found id: ""
	I1212 00:38:33.975181  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.975188  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:33.975194  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:33.975256  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:33.999707  530956 cri.go:89] found id: ""
	I1212 00:38:33.999722  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.999731  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:33.999736  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:33.999806  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:34.028202  530956 cri.go:89] found id: ""
	I1212 00:38:34.028216  530956 logs.go:282] 0 containers: []
	W1212 00:38:34.028224  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:34.028229  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:34.028289  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:34.053144  530956 cri.go:89] found id: ""
	I1212 00:38:34.053158  530956 logs.go:282] 0 containers: []
	W1212 00:38:34.053169  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:34.053175  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:34.053239  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:34.080035  530956 cri.go:89] found id: ""
	I1212 00:38:34.080050  530956 logs.go:282] 0 containers: []
	W1212 00:38:34.080058  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:34.080066  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:34.080076  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:34.146175  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:34.146192  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:34.160652  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:34.160668  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:34.223173  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:34.215210   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.216058   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.217521   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.217956   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.219415   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:34.223184  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:34.223194  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:34.292571  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:34.292590  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:36.820393  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:36.830345  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:36.830406  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:36.854187  530956 cri.go:89] found id: ""
	I1212 00:38:36.854201  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.854208  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:36.854213  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:36.854268  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:36.882747  530956 cri.go:89] found id: ""
	I1212 00:38:36.882767  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.882774  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:36.882779  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:36.882836  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:36.909295  530956 cri.go:89] found id: ""
	I1212 00:38:36.909310  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.909317  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:36.909321  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:36.909380  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:36.939718  530956 cri.go:89] found id: ""
	I1212 00:38:36.939732  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.939739  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:36.939745  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:36.939805  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:36.985049  530956 cri.go:89] found id: ""
	I1212 00:38:36.985063  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.985070  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:36.985075  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:36.985135  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:37.018069  530956 cri.go:89] found id: ""
	I1212 00:38:37.018092  530956 logs.go:282] 0 containers: []
	W1212 00:38:37.018101  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:37.018107  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:37.018197  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:37.045321  530956 cri.go:89] found id: ""
	I1212 00:38:37.045335  530956 logs.go:282] 0 containers: []
	W1212 00:38:37.045342  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:37.045349  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:37.045366  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:37.110695  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:37.110716  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:37.125484  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:37.125500  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:37.191768  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:37.183160   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.184307   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.184933   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.186530   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.186898   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:37.191778  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:37.191789  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:37.258979  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:37.258998  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:39.789133  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:39.799919  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:39.799985  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:39.825459  530956 cri.go:89] found id: ""
	I1212 00:38:39.825473  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.825481  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:39.825487  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:39.825550  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:39.853725  530956 cri.go:89] found id: ""
	I1212 00:38:39.853741  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.853750  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:39.853757  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:39.853833  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:39.879329  530956 cri.go:89] found id: ""
	I1212 00:38:39.879343  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.879350  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:39.879355  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:39.879417  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:39.910098  530956 cri.go:89] found id: ""
	I1212 00:38:39.910111  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.910118  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:39.910124  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:39.910184  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:39.940693  530956 cri.go:89] found id: ""
	I1212 00:38:39.940707  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.940714  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:39.940719  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:39.940779  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:39.967072  530956 cri.go:89] found id: ""
	I1212 00:38:39.967085  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.967093  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:39.967099  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:39.967165  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:39.992659  530956 cri.go:89] found id: ""
	I1212 00:38:39.992672  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.992680  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:39.992687  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:39.992697  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:40.113165  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:40.113185  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:40.130134  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:40.130150  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:40.200442  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:40.191596   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.192392   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.194267   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.194628   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.196363   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:40.200453  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:40.200463  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:40.271707  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:40.271728  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:42.801953  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:42.811892  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:42.811958  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:42.841306  530956 cri.go:89] found id: ""
	I1212 00:38:42.841320  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.841328  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:42.841334  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:42.841395  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:42.869294  530956 cri.go:89] found id: ""
	I1212 00:38:42.869308  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.869314  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:42.869319  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:42.869381  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:42.898367  530956 cri.go:89] found id: ""
	I1212 00:38:42.898381  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.898388  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:42.898393  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:42.898454  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:42.925039  530956 cri.go:89] found id: ""
	I1212 00:38:42.925052  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.925059  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:42.925065  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:42.925125  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:42.955313  530956 cri.go:89] found id: ""
	I1212 00:38:42.955327  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.955334  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:42.955339  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:42.955404  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:42.979722  530956 cri.go:89] found id: ""
	I1212 00:38:42.979735  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.979742  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:42.979747  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:42.979808  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:43.027955  530956 cri.go:89] found id: ""
	I1212 00:38:43.027969  530956 logs.go:282] 0 containers: []
	W1212 00:38:43.027976  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:43.027983  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:43.027996  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:43.043222  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:43.043240  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:43.111269  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:43.102010   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.103597   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.104461   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.105967   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.106269   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:43.111321  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:43.111331  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:43.177977  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:43.177997  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:43.206880  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:43.206895  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:45.775312  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:45.785672  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:45.785736  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:45.811375  530956 cri.go:89] found id: ""
	I1212 00:38:45.811389  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.811396  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:45.811400  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:45.811459  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:45.836941  530956 cri.go:89] found id: ""
	I1212 00:38:45.836956  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.836963  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:45.836968  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:45.837031  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:45.863375  530956 cri.go:89] found id: ""
	I1212 00:38:45.863389  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.863396  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:45.863402  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:45.863461  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:45.888628  530956 cri.go:89] found id: ""
	I1212 00:38:45.888641  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.888648  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:45.888654  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:45.888712  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:45.917199  530956 cri.go:89] found id: ""
	I1212 00:38:45.917213  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.917221  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:45.917226  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:45.917289  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:45.944008  530956 cri.go:89] found id: ""
	I1212 00:38:45.944022  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.944029  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:45.944034  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:45.944093  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:45.968971  530956 cri.go:89] found id: ""
	I1212 00:38:45.968984  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.968992  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:45.969000  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:45.969010  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:46.034356  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:46.034375  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:46.048756  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:46.048771  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:46.115073  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:46.106286   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.107154   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.108746   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.109165   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.110638   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:46.115096  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:46.115107  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:46.182387  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:46.182407  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:48.712482  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:48.722635  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:48.722715  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:48.752202  530956 cri.go:89] found id: ""
	I1212 00:38:48.752215  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.752222  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:48.752227  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:48.752287  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:48.779084  530956 cri.go:89] found id: ""
	I1212 00:38:48.779097  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.779105  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:48.779110  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:48.779165  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:48.803352  530956 cri.go:89] found id: ""
	I1212 00:38:48.803366  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.803375  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:48.803380  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:48.803441  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:48.829635  530956 cri.go:89] found id: ""
	I1212 00:38:48.829649  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.829656  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:48.829661  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:48.829720  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:48.854311  530956 cri.go:89] found id: ""
	I1212 00:38:48.854324  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.854332  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:48.854337  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:48.854394  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:48.879369  530956 cri.go:89] found id: ""
	I1212 00:38:48.879383  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.879390  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:48.879396  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:48.879456  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:48.908110  530956 cri.go:89] found id: ""
	I1212 00:38:48.908124  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.908131  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:48.908138  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:48.908151  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:48.972035  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:48.972053  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:48.986646  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:48.986668  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:49.053589  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:49.045696   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.046251   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.047754   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.048291   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.049804   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:49.053599  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:49.053608  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:49.123212  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:49.123236  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
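
The block above is one full iteration of minikube's control-plane probe: it pgreps for a kube-apiserver process, then asks crictl for each expected component by name, and every query comes back empty. Below is a minimal sketch of the same probe, runnable by hand inside the node (for example via `minikube ssh`); the component names are copied from the log lines above, and the loop itself is illustrative shell, not minikube's actual code:

    # Probe each control-plane component the way the log does, via crictl.
    # Empty output from `crictl ps -a --quiet --name=<c>` means no container
    # with that name was ever created, matching the `found id: ""` lines.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -n "$ids" ]; then echo "$c: $ids"; else echo "no container matching \"$c\""; fi
    done
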
	I1212 00:38:51.651584  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:51.662032  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:51.662096  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:51.687559  530956 cri.go:89] found id: ""
	I1212 00:38:51.687573  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.687580  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:51.687586  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:51.687655  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:51.713801  530956 cri.go:89] found id: ""
	I1212 00:38:51.713828  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.713835  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:51.713840  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:51.713903  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:51.748993  530956 cri.go:89] found id: ""
	I1212 00:38:51.749006  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.749028  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:51.749034  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:51.749091  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:51.777108  530956 cri.go:89] found id: ""
	I1212 00:38:51.777122  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.777129  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:51.777135  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:51.777200  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:51.805174  530956 cri.go:89] found id: ""
	I1212 00:38:51.805188  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.805195  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:51.805201  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:51.805266  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:51.830660  530956 cri.go:89] found id: ""
	I1212 00:38:51.830674  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.830701  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:51.830706  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:51.830778  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:51.855989  530956 cri.go:89] found id: ""
	I1212 00:38:51.856003  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.856017  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:51.856024  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:51.856035  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:51.887241  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:51.887257  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:51.953055  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:51.953075  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:51.969638  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:51.969660  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:52.045683  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:52.037116   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.037541   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.039334   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.039776   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.041452   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:52.037116   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.037541   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.039334   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.039776   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.041452   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:52.045694  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:52.045705  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
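
When every container query is empty, minikube falls back to gathering host-level diagnostics: the kubelet and CRI-O journals, dmesg, `kubectl describe nodes`, and container status. The same bundle can be collected manually; the commands below are copied verbatim from the `Run:` lines in this log, so the only assumption is an interactive shell on the node:

    # Kubelet and CRI-O service journals, last 400 lines each.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # Kernel warnings and errors.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Node description via the kubeconfig minikube provisioned
    # (fails here for as long as the apiserver is down).
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # All containers, falling back to docker if crictl is absent.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
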
	I1212 00:38:54.617323  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:54.627443  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:54.627502  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:54.651505  530956 cri.go:89] found id: ""
	I1212 00:38:54.651519  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.651526  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:54.651532  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:54.651589  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:54.675935  530956 cri.go:89] found id: ""
	I1212 00:38:54.675961  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.675968  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:54.675973  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:54.676042  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:54.701954  530956 cri.go:89] found id: ""
	I1212 00:38:54.701970  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.701979  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:54.701986  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:54.702056  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:54.733636  530956 cri.go:89] found id: ""
	I1212 00:38:54.733657  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.733666  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:54.733671  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:54.733742  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:54.761858  530956 cri.go:89] found id: ""
	I1212 00:38:54.761885  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.761892  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:54.761897  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:54.761965  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:54.798397  530956 cri.go:89] found id: ""
	I1212 00:38:54.798411  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.798431  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:54.798436  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:54.798502  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:54.823810  530956 cri.go:89] found id: ""
	I1212 00:38:54.823824  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.823831  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:54.823840  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:54.823850  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:54.891230  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:54.891249  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:54.907075  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:54.907092  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:54.979081  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:54.970178   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.971009   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.971760   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.973599   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.973900   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:54.970178   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.971009   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.971760   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.973599   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.973900   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:54.979091  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:54.979103  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:55.048465  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:55.048486  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:57.579400  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:57.590372  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:57.590435  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:57.618082  530956 cri.go:89] found id: ""
	I1212 00:38:57.618096  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.618103  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:57.618108  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:57.618169  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:57.644801  530956 cri.go:89] found id: ""
	I1212 00:38:57.644815  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.644822  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:57.644827  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:57.644886  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:57.670018  530956 cri.go:89] found id: ""
	I1212 00:38:57.670032  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.670045  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:57.670050  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:57.670111  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:57.695026  530956 cri.go:89] found id: ""
	I1212 00:38:57.695040  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.695047  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:57.695052  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:57.695116  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:57.726077  530956 cri.go:89] found id: ""
	I1212 00:38:57.726091  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.726098  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:57.726103  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:57.726182  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:57.766280  530956 cri.go:89] found id: ""
	I1212 00:38:57.766295  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.766302  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:57.766308  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:57.766366  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:57.794888  530956 cri.go:89] found id: ""
	I1212 00:38:57.794902  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.794909  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:57.794917  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:57.794931  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:57.861092  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:57.861111  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:57.876214  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:57.876230  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:57.943746  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:57.933552   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.934412   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.936560   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.937552   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.938297   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:57.933552   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.934412   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.936560   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.937552   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.938297   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:57.943757  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:57.943767  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:58.013702  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:58.013722  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
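
Each `describe nodes` attempt above dies with `dial tcp [::1]:8441: connect: connection refused`: the provisioned kubeconfig points at localhost:8441 and nothing is listening there, which is consistent with crictl finding no kube-apiserver container. A hypothetical pair of follow-up checks for that symptom (standard iproute2 and kubectl usage, not commands taken from this log):

    # Is anything bound to the apiserver port this profile uses?
    sudo ss -ltnp | grep -w 8441 || echo "nothing listening on 8441"
    # If something is listening, ask the apiserver directly for its health.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz
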
	I1212 00:39:00.543612  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:00.553735  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:00.553795  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:00.580386  530956 cri.go:89] found id: ""
	I1212 00:39:00.580400  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.580407  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:00.580412  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:00.580471  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:00.608511  530956 cri.go:89] found id: ""
	I1212 00:39:00.608525  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.608532  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:00.608537  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:00.608594  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:00.633613  530956 cri.go:89] found id: ""
	I1212 00:39:00.633627  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.633634  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:00.633639  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:00.633696  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:00.658755  530956 cri.go:89] found id: ""
	I1212 00:39:00.658769  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.658776  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:00.658782  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:00.658845  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:00.688160  530956 cri.go:89] found id: ""
	I1212 00:39:00.688174  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.688181  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:00.688187  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:00.688246  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:00.714115  530956 cri.go:89] found id: ""
	I1212 00:39:00.714129  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.714136  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:00.714142  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:00.714203  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:00.743594  530956 cri.go:89] found id: ""
	I1212 00:39:00.743607  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.743614  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:00.743622  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:00.743632  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:00.825728  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:00.825750  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:00.840575  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:00.840590  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:00.904328  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:00.896372   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.896852   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.898503   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.898943   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.900532   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:00.896372   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.896852   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.898503   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.898943   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.900532   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:00.904339  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:00.904350  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:00.971157  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:00.971177  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:03.500568  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:03.510753  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:03.510824  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:03.535333  530956 cri.go:89] found id: ""
	I1212 00:39:03.535347  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.535354  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:03.535359  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:03.535422  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:03.560575  530956 cri.go:89] found id: ""
	I1212 00:39:03.560589  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.560597  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:03.560602  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:03.560659  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:03.589048  530956 cri.go:89] found id: ""
	I1212 00:39:03.589062  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.589069  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:03.589075  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:03.589131  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:03.614812  530956 cri.go:89] found id: ""
	I1212 00:39:03.614826  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.614834  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:03.614839  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:03.614908  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:03.641138  530956 cri.go:89] found id: ""
	I1212 00:39:03.641152  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.641158  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:03.641164  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:03.641221  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:03.669855  530956 cri.go:89] found id: ""
	I1212 00:39:03.669869  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.669876  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:03.669884  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:03.669943  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:03.694625  530956 cri.go:89] found id: ""
	I1212 00:39:03.694650  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.694657  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:03.694665  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:03.694676  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:03.761872  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:03.761891  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:03.777581  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:03.777598  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:03.843774  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:03.835704   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.836242   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.838010   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.838382   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.839850   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:03.835704   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.836242   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.838010   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.838382   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.839850   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:03.843783  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:03.843793  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:03.914951  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:03.914977  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
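
Note that crictl is returning nothing at all here, not even exited containers, which suggests the kubelet never created the control-plane static pods in the first place. Two hedged follow-ups, assuming the kubeadm-style layout minikube normally uses; the manifest path is an assumption, not something this log shows:

    # kubeadm-style static pod manifests (path assumed, not shown in this log).
    ls -l /etc/kubernetes/manifests/
    # Ask the kubelet journal why the apiserver pod never materialized.
    sudo journalctl -u kubelet -n 400 | grep -iE 'static pod|manifest|apiserver'
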
	I1212 00:39:06.443917  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:06.454370  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:06.454434  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:06.482109  530956 cri.go:89] found id: ""
	I1212 00:39:06.482123  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.482131  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:06.482136  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:06.482199  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:06.509716  530956 cri.go:89] found id: ""
	I1212 00:39:06.509730  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.509737  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:06.509742  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:06.509800  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:06.537521  530956 cri.go:89] found id: ""
	I1212 00:39:06.537535  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.537542  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:06.537548  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:06.537606  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:06.562757  530956 cri.go:89] found id: ""
	I1212 00:39:06.562770  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.562778  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:06.562783  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:06.562842  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:06.587417  530956 cri.go:89] found id: ""
	I1212 00:39:06.587431  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.587439  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:06.587443  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:06.587507  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:06.612775  530956 cri.go:89] found id: ""
	I1212 00:39:06.612789  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.612797  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:06.612804  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:06.612864  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:06.637360  530956 cri.go:89] found id: ""
	I1212 00:39:06.637374  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.637382  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:06.637389  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:06.637400  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:06.651687  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:06.651703  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:06.714510  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:06.706351   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.706972   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.708728   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.709213   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.710650   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:06.706351   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.706972   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.708728   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.709213   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.710650   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:06.714521  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:06.714531  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:06.793242  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:06.793263  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:06.825153  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:06.825170  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:09.391589  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:09.401762  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:09.401823  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:09.426113  530956 cri.go:89] found id: ""
	I1212 00:39:09.426127  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.426135  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:09.426139  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:09.426197  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:09.455495  530956 cri.go:89] found id: ""
	I1212 00:39:09.455509  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.455522  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:09.455527  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:09.455586  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:09.484947  530956 cri.go:89] found id: ""
	I1212 00:39:09.484961  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.484969  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:09.484975  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:09.485038  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:09.510850  530956 cri.go:89] found id: ""
	I1212 00:39:09.510865  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.510873  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:09.510878  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:09.510936  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:09.536933  530956 cri.go:89] found id: ""
	I1212 00:39:09.536955  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.536963  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:09.536968  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:09.537038  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:09.565308  530956 cri.go:89] found id: ""
	I1212 00:39:09.565321  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.565328  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:09.565333  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:09.565391  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:09.596694  530956 cri.go:89] found id: ""
	I1212 00:39:09.596708  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.596716  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:09.596724  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:09.596734  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:09.661768  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:09.661787  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:09.676496  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:09.676512  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:09.751036  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:09.740810   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.741527   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.743548   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.744409   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.746279   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:09.740810   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.741527   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.743548   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.744409   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.746279   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:09.751057  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:09.751069  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:09.831885  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:09.831905  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
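
The timestamps show the whole probe-and-gather cycle repeating on a roughly 3-second cadence (00:39:03.50 → 00:39:06.44 → 00:39:09.39) with no change in outcome. The retry, reduced to a plain shell wait loop for illustration; minikube implements this in Go, and the pgrep pattern is the one the log repeats:

    # Poll until a kube-apiserver process for this profile shows up,
    # mirroring the pgrep the log re-runs every few seconds.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done
    echo "kube-apiserver process found"
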
	I1212 00:39:12.361885  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:12.371912  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:12.371972  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:12.400852  530956 cri.go:89] found id: ""
	I1212 00:39:12.400867  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.400874  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:12.400879  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:12.400939  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:12.426229  530956 cri.go:89] found id: ""
	I1212 00:39:12.426244  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.426251  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:12.426256  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:12.426313  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:12.455450  530956 cri.go:89] found id: ""
	I1212 00:39:12.455465  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.455472  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:12.455477  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:12.455542  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:12.480339  530956 cri.go:89] found id: ""
	I1212 00:39:12.480353  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.480360  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:12.480365  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:12.480425  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:12.508098  530956 cri.go:89] found id: ""
	I1212 00:39:12.508112  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.508119  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:12.508124  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:12.508185  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:12.534232  530956 cri.go:89] found id: ""
	I1212 00:39:12.534246  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.534253  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:12.534259  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:12.534318  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:12.564030  530956 cri.go:89] found id: ""
	I1212 00:39:12.564045  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.564053  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:12.564061  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:12.564072  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:12.578300  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:12.578315  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:12.645692  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:12.637241   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.637958   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.639484   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.639902   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.641511   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:12.637241   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.637958   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.639484   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.639902   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.641511   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:12.645702  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:12.645714  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:12.716817  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:12.716835  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:12.755607  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:12.755622  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:15.328461  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:15.338656  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:15.338747  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:15.368754  530956 cri.go:89] found id: ""
	I1212 00:39:15.368768  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.368775  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:15.368780  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:15.368839  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:15.395430  530956 cri.go:89] found id: ""
	I1212 00:39:15.395444  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.395451  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:15.395456  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:15.395522  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:15.420901  530956 cri.go:89] found id: ""
	I1212 00:39:15.420922  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.420930  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:15.420935  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:15.420996  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:15.446341  530956 cri.go:89] found id: ""
	I1212 00:39:15.446355  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.446362  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:15.446367  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:15.446425  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:15.472134  530956 cri.go:89] found id: ""
	I1212 00:39:15.472148  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.472155  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:15.472160  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:15.472224  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:15.499707  530956 cri.go:89] found id: ""
	I1212 00:39:15.499721  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.499729  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:15.499734  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:15.499803  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:15.525097  530956 cri.go:89] found id: ""
	I1212 00:39:15.525111  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.525119  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:15.525126  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:15.525141  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:15.591570  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:15.591589  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:15.606307  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:15.606323  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:15.671615  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:15.663912   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.664737   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.666223   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.666722   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.668013   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:15.671625  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:15.671640  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:15.740633  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:15.740680  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:18.284352  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:18.294497  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:18.294570  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:18.320150  530956 cri.go:89] found id: ""
	I1212 00:39:18.320164  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.320173  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:18.320178  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:18.320236  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:18.346472  530956 cri.go:89] found id: ""
	I1212 00:39:18.346486  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.346493  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:18.346498  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:18.346556  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:18.377328  530956 cri.go:89] found id: ""
	I1212 00:39:18.377342  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.377349  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:18.377354  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:18.377411  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:18.402792  530956 cri.go:89] found id: ""
	I1212 00:39:18.402813  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.402820  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:18.402826  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:18.402889  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:18.433183  530956 cri.go:89] found id: ""
	I1212 00:39:18.433198  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.433205  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:18.433210  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:18.433272  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:18.458993  530956 cri.go:89] found id: ""
	I1212 00:39:18.459007  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.459015  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:18.459020  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:18.459082  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:18.483237  530956 cri.go:89] found id: ""
	I1212 00:39:18.483251  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.483258  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:18.483267  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:18.483276  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:18.549785  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:18.549803  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:18.564675  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:18.564692  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:18.635252  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:18.622293   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.627996   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.628725   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.629829   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.630310   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:18.635261  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:18.635271  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:18.704032  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:18.704054  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
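	Every "connection refused" above is kubectl's discovery request to https://localhost:8441/api failing at TCP connect time, before TLS or authentication, which is why each batch of retries fails identically. A small stand-alone probe of that endpoint might look like the sketch below (hypothetical, not part of the test suite):

	// probe_apiserver.go - illustrative probe of the endpoint kubectl is hitting.
	// A refused TCP connection fails before any TLS handshake, so skipping
	// certificate verification here does not change the observed error.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // same timeout kubectl passes via ?timeout=32s
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8441/api?timeout=32s")
		if err != nil {
			// With no apiserver listening, this prints a "connection refused"
			// error matching the log lines above.
			fmt.Println("probe failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded:", resp.Status)
	}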
	I1212 00:39:21.245504  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:21.256336  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:21.256398  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:21.282849  530956 cri.go:89] found id: ""
	I1212 00:39:21.282863  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.282871  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:21.282878  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:21.282936  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:21.309330  530956 cri.go:89] found id: ""
	I1212 00:39:21.309344  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.309351  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:21.309359  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:21.309419  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:21.338973  530956 cri.go:89] found id: ""
	I1212 00:39:21.338986  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.338994  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:21.338999  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:21.339064  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:21.366261  530956 cri.go:89] found id: ""
	I1212 00:39:21.366275  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.366282  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:21.366287  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:21.366346  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:21.393801  530956 cri.go:89] found id: ""
	I1212 00:39:21.393815  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.393822  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:21.393827  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:21.393888  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:21.418339  530956 cri.go:89] found id: ""
	I1212 00:39:21.418353  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.418360  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:21.418365  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:21.418425  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:21.443336  530956 cri.go:89] found id: ""
	I1212 00:39:21.443350  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.443356  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:21.443364  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:21.443375  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:21.470973  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:21.470988  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:21.540182  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:21.540203  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:21.554835  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:21.554851  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:21.618440  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:21.609987   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.610798   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.612355   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.612867   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.614445   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:21.618450  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:21.618460  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:24.186363  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:24.196446  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:24.196514  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:24.228176  530956 cri.go:89] found id: ""
	I1212 00:39:24.228189  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.228196  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:24.228201  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:24.228263  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:24.252432  530956 cri.go:89] found id: ""
	I1212 00:39:24.252446  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.252453  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:24.252458  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:24.252517  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:24.277088  530956 cri.go:89] found id: ""
	I1212 00:39:24.277102  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.277109  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:24.277113  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:24.277172  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:24.301976  530956 cri.go:89] found id: ""
	I1212 00:39:24.301989  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.301996  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:24.302001  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:24.302058  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:24.326771  530956 cri.go:89] found id: ""
	I1212 00:39:24.326785  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.326792  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:24.326797  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:24.326858  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:24.352740  530956 cri.go:89] found id: ""
	I1212 00:39:24.352754  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.352761  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:24.352766  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:24.352825  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:24.379469  530956 cri.go:89] found id: ""
	I1212 00:39:24.379483  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.379490  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:24.379498  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:24.379508  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:24.407400  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:24.407417  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:24.473931  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:24.473951  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:24.488478  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:24.488494  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:24.552073  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:24.544053   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.544795   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.546281   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.546877   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.548351   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:24.552083  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:24.552093  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:27.124323  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:27.134160  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:27.134218  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:27.161224  530956 cri.go:89] found id: ""
	I1212 00:39:27.161239  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.161247  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:27.161253  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:27.161317  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:27.185561  530956 cri.go:89] found id: ""
	I1212 00:39:27.185575  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.185582  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:27.185587  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:27.185647  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:27.212949  530956 cri.go:89] found id: ""
	I1212 00:39:27.212962  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.212969  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:27.212974  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:27.213035  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:27.237907  530956 cri.go:89] found id: ""
	I1212 00:39:27.237921  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.237928  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:27.237933  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:27.237991  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:27.264773  530956 cri.go:89] found id: ""
	I1212 00:39:27.264787  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.264794  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:27.264799  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:27.264858  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:27.290448  530956 cri.go:89] found id: ""
	I1212 00:39:27.290462  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.290469  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:27.290474  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:27.290531  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:27.315823  530956 cri.go:89] found id: ""
	I1212 00:39:27.315837  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.315844  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:27.315852  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:27.315863  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:27.389757  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:27.389777  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:27.422043  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:27.422059  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:27.492490  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:27.492509  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:27.507777  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:27.507793  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:27.571981  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:27.563800   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.564533   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.566031   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.566607   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.568082   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:30.074632  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:30.089373  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:30.089465  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:30.125906  530956 cri.go:89] found id: ""
	I1212 00:39:30.125923  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.125931  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:30.125939  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:30.126019  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:30.159780  530956 cri.go:89] found id: ""
	I1212 00:39:30.159796  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.159804  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:30.159810  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:30.159878  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:30.186451  530956 cri.go:89] found id: ""
	I1212 00:39:30.186466  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.186473  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:30.186478  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:30.186541  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:30.212831  530956 cri.go:89] found id: ""
	I1212 00:39:30.212846  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.212859  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:30.212864  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:30.212926  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:30.239897  530956 cri.go:89] found id: ""
	I1212 00:39:30.239912  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.239919  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:30.239924  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:30.239987  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:30.265595  530956 cri.go:89] found id: ""
	I1212 00:39:30.265610  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.265618  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:30.265623  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:30.265684  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:30.293057  530956 cri.go:89] found id: ""
	I1212 00:39:30.293072  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.293079  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:30.293087  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:30.293098  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:30.360384  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:30.360403  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:30.375514  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:30.375533  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:30.445622  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:30.436678   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.437400   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.439405   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.440010   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.441699   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:30.445632  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:30.445642  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:30.514984  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:30.515002  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:33.046486  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:33.057328  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:33.057389  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:33.082571  530956 cri.go:89] found id: ""
	I1212 00:39:33.082584  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.082592  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:33.082597  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:33.082656  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:33.107156  530956 cri.go:89] found id: ""
	I1212 00:39:33.107169  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.107176  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:33.107181  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:33.107242  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:33.132433  530956 cri.go:89] found id: ""
	I1212 00:39:33.132448  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.132456  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:33.132460  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:33.132524  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:33.158141  530956 cri.go:89] found id: ""
	I1212 00:39:33.158155  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.158162  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:33.158167  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:33.158229  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:33.185335  530956 cri.go:89] found id: ""
	I1212 00:39:33.185350  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.185357  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:33.185362  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:33.185423  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:33.214702  530956 cri.go:89] found id: ""
	I1212 00:39:33.214716  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.214731  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:33.214738  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:33.214798  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:33.239415  530956 cri.go:89] found id: ""
	I1212 00:39:33.239429  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.239436  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:33.239444  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:33.239462  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:33.303881  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:33.303900  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:33.318306  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:33.318324  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:33.385940  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:33.376699   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.377337   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379127   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379736   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.381336   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:33.385950  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:33.385961  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:33.453867  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:33.453884  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:35.983022  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:35.993721  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:35.993785  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:36.032639  530956 cri.go:89] found id: ""
	I1212 00:39:36.032654  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.032662  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:36.032667  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:36.032737  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:36.069795  530956 cri.go:89] found id: ""
	I1212 00:39:36.069810  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.069817  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:36.069822  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:36.069882  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:36.099096  530956 cri.go:89] found id: ""
	I1212 00:39:36.099111  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.099118  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:36.099124  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:36.099184  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:36.128685  530956 cri.go:89] found id: ""
	I1212 00:39:36.128699  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.128706  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:36.128711  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:36.128772  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:36.154641  530956 cri.go:89] found id: ""
	I1212 00:39:36.154654  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.154662  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:36.154666  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:36.154762  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:36.179316  530956 cri.go:89] found id: ""
	I1212 00:39:36.179330  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.179338  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:36.179343  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:36.179402  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:36.205036  530956 cri.go:89] found id: ""
	I1212 00:39:36.205050  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.205057  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:36.205066  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:36.205079  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:36.271067  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:36.271086  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:36.285990  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:36.286006  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:36.350986  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:36.343284   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.343819   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345314   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345743   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.347205   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:36.350996  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:36.351005  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:36.418783  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:36.418803  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:38.948706  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:38.958630  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:38.958705  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:38.988268  530956 cri.go:89] found id: ""
	I1212 00:39:38.988282  530956 logs.go:282] 0 containers: []
	W1212 00:39:38.988289  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:38.988294  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:38.988372  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:39.017066  530956 cri.go:89] found id: ""
	I1212 00:39:39.017088  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.017095  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:39.017100  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:39.017158  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:39.044203  530956 cri.go:89] found id: ""
	I1212 00:39:39.044217  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.044223  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:39.044232  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:39.044293  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:39.073574  530956 cri.go:89] found id: ""
	I1212 00:39:39.073588  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.073595  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:39.073600  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:39.073658  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:39.098254  530956 cri.go:89] found id: ""
	I1212 00:39:39.098267  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.098274  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:39.098279  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:39.098338  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:39.122552  530956 cri.go:89] found id: ""
	I1212 00:39:39.122566  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.122573  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:39.122578  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:39.122641  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:39.149933  530956 cri.go:89] found id: ""
	I1212 00:39:39.149947  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.149954  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:39.149961  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:39.149972  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:39.164970  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:39.164986  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:39.228249  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:39.219299   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.219833   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.221740   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.222278   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.223944   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
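	Each retry cycle above follows the same pattern: minikube first probes for a running apiserver process with pgrep, then asks the CRI whether any of the expected control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) exist; every listing returns an empty id, so it falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A minimal sketch of one probe cycle, assuming the while/sleep wrapper and the roughly 3-second backoff (the pgrep and crictl invocations are verbatim from the log):

	    # Sketch of one probe cycle; the loop wrapper and sleep are assumptions,
	    # the pgrep and crictl commands appear verbatim in the log above.
	    while true; do
	      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        break  # apiserver process found
	      fi
	      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                  kube-controller-manager kindnet; do
	        sudo crictl ps -a --quiet --name="$name"  # empty output: no such container
	      done
	      sleep 3
	    done

	The describe-nodes failures are consistent with that picture: nothing is listening on localhost:8441, so every kubectl request dies with "connection refused" before it ever reaches an API server.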
	I1212 00:39:39.228259  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:39.228272  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:39.295712  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:39.295731  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:39.326861  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:39.326879  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:41.894749  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:41.904730  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:41.904791  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:41.929481  530956 cri.go:89] found id: ""
	I1212 00:39:41.929494  530956 logs.go:282] 0 containers: []
	W1212 00:39:41.929501  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:41.929506  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:41.929564  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:41.956371  530956 cri.go:89] found id: ""
	I1212 00:39:41.956385  530956 logs.go:282] 0 containers: []
	W1212 00:39:41.956392  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:41.956397  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:41.956453  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:41.998298  530956 cri.go:89] found id: ""
	I1212 00:39:41.998313  530956 logs.go:282] 0 containers: []
	W1212 00:39:41.998327  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:41.998332  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:41.998394  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:42.039528  530956 cri.go:89] found id: ""
	I1212 00:39:42.039542  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.039549  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:42.039554  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:42.039617  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:42.071895  530956 cri.go:89] found id: ""
	I1212 00:39:42.071909  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.071918  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:42.071923  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:42.071999  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:42.104807  530956 cri.go:89] found id: ""
	I1212 00:39:42.104823  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.104831  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:42.104837  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:42.104914  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:42.139871  530956 cri.go:89] found id: ""
	I1212 00:39:42.139886  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.139894  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:42.139903  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:42.139917  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:42.221872  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:42.211579   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.212551   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.214428   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.215573   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.216630   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:42.221883  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:42.221894  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:42.294247  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:42.294267  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:42.327229  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:42.327245  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:42.396289  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:42.396308  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:44.911333  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:44.921559  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:44.921618  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:44.947811  530956 cri.go:89] found id: ""
	I1212 00:39:44.947825  530956 logs.go:282] 0 containers: []
	W1212 00:39:44.947832  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:44.947837  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:44.947898  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:44.974488  530956 cri.go:89] found id: ""
	I1212 00:39:44.974502  530956 logs.go:282] 0 containers: []
	W1212 00:39:44.974509  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:44.974514  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:44.974578  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:45.062335  530956 cri.go:89] found id: ""
	I1212 00:39:45.062350  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.062358  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:45.062363  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:45.062431  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:45.115594  530956 cri.go:89] found id: ""
	I1212 00:39:45.115611  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.115621  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:45.115627  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:45.115695  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:45.157432  530956 cri.go:89] found id: ""
	I1212 00:39:45.157449  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.157457  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:45.157463  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:45.157542  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:45.199222  530956 cri.go:89] found id: ""
	I1212 00:39:45.199237  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.199247  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:45.199252  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:45.199327  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:45.277211  530956 cri.go:89] found id: ""
	I1212 00:39:45.277239  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.277248  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:45.277256  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:45.277272  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:45.354665  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:45.354742  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:45.370015  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:45.370032  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:45.437294  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:45.428349   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.429025   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.430763   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.431362   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.433211   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:45.437306  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:45.437317  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:45.506731  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:45.506752  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:48.035477  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:48.045681  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:48.045741  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:48.076045  530956 cri.go:89] found id: ""
	I1212 00:39:48.076059  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.076066  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:48.076072  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:48.076135  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:48.110061  530956 cri.go:89] found id: ""
	I1212 00:39:48.110074  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.110082  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:48.110087  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:48.110146  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:48.134924  530956 cri.go:89] found id: ""
	I1212 00:39:48.134939  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.134946  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:48.134951  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:48.135014  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:48.160105  530956 cri.go:89] found id: ""
	I1212 00:39:48.160119  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.160126  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:48.160131  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:48.160199  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:48.185148  530956 cri.go:89] found id: ""
	I1212 00:39:48.185162  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.185169  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:48.185174  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:48.185236  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:48.210105  530956 cri.go:89] found id: ""
	I1212 00:39:48.210119  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.210127  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:48.210132  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:48.210198  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:48.234723  530956 cri.go:89] found id: ""
	I1212 00:39:48.234736  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.234743  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:48.234752  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:48.234762  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:48.264606  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:48.264624  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:48.333093  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:48.333111  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:48.348065  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:48.348080  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:48.410868  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:48.402563   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.403327   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405002   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405468   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.407068   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:48.410879  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:48.410891  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:50.982598  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:50.995299  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:50.995361  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:51.032326  530956 cri.go:89] found id: ""
	I1212 00:39:51.032340  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.032348  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:51.032353  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:51.032412  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:51.060416  530956 cri.go:89] found id: ""
	I1212 00:39:51.060435  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.060444  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:51.060448  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:51.060525  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:51.087755  530956 cri.go:89] found id: ""
	I1212 00:39:51.087769  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.087777  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:51.087783  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:51.087844  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:51.113932  530956 cri.go:89] found id: ""
	I1212 00:39:51.113946  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.113954  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:51.113959  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:51.114017  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:51.141585  530956 cri.go:89] found id: ""
	I1212 00:39:51.141599  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.141607  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:51.141612  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:51.141678  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:51.169491  530956 cri.go:89] found id: ""
	I1212 00:39:51.169506  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.169513  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:51.169518  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:51.169577  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:51.195655  530956 cri.go:89] found id: ""
	I1212 00:39:51.195668  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.195676  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:51.195684  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:51.195694  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:51.264764  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:51.264785  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:51.291612  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:51.291628  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:51.359746  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:51.359764  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:51.374319  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:51.374340  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:51.437078  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:51.428471   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.429034   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.430488   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.431070   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.432638   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:53.938110  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:53.948663  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:53.948763  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:53.987477  530956 cri.go:89] found id: ""
	I1212 00:39:53.987490  530956 logs.go:282] 0 containers: []
	W1212 00:39:53.987497  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:53.987502  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:53.987565  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:54.026859  530956 cri.go:89] found id: ""
	I1212 00:39:54.026873  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.026881  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:54.026897  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:54.026958  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:54.054638  530956 cri.go:89] found id: ""
	I1212 00:39:54.054652  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.054659  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:54.054664  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:54.054820  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:54.080864  530956 cri.go:89] found id: ""
	I1212 00:39:54.080879  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.080886  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:54.080891  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:54.080958  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:54.106972  530956 cri.go:89] found id: ""
	I1212 00:39:54.106986  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.106993  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:54.106998  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:54.107056  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:54.131665  530956 cri.go:89] found id: ""
	I1212 00:39:54.131678  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.131686  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:54.131692  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:54.131749  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:54.155857  530956 cri.go:89] found id: ""
	I1212 00:39:54.155870  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.155877  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:54.155885  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:54.155895  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:54.225662  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:54.216735   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.217433   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219268   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219827   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.221703   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:54.225675  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:54.225686  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:54.297964  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:54.297992  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:54.330016  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:54.330041  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:54.401820  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:54.401842  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:56.918391  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:56.929720  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:56.929780  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:56.955459  530956 cri.go:89] found id: ""
	I1212 00:39:56.955473  530956 logs.go:282] 0 containers: []
	W1212 00:39:56.955480  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:56.955485  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:56.955543  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:56.987918  530956 cri.go:89] found id: ""
	I1212 00:39:56.987932  530956 logs.go:282] 0 containers: []
	W1212 00:39:56.987939  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:56.987944  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:56.988002  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:57.020006  530956 cri.go:89] found id: ""
	I1212 00:39:57.020020  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.020033  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:57.020038  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:57.020115  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:57.048442  530956 cri.go:89] found id: ""
	I1212 00:39:57.048467  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.048475  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:57.048483  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:57.048552  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:57.074435  530956 cri.go:89] found id: ""
	I1212 00:39:57.074449  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.074456  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:57.074461  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:57.074521  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:57.099293  530956 cri.go:89] found id: ""
	I1212 00:39:57.099307  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.099315  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:57.099320  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:57.099379  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:57.125629  530956 cri.go:89] found id: ""
	I1212 00:39:57.125651  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.125659  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:57.125666  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:57.125676  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:57.155351  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:57.155367  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:57.220025  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:57.220044  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:57.234981  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:57.235003  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:57.300835  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:57.292962   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.293535   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295085   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295551   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.297012   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:57.300845  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:57.300856  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
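	The same five log sources rotate through each cycle, only their order changes. To pull any of them by hand on the node (e.g. via minikube ssh), the commands below should reproduce what the runner collects; --no-pager is an assumption added for non-interactive use, everything else is verbatim from the log:

	    sudo journalctl -u crio -n 400 --no-pager     # CRI-O
	    sudo journalctl -u kubelet -n 400 --no-pager  # kubelet
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a                             # container status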
	I1212 00:39:59.869530  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:59.882048  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:59.882110  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:59.907681  530956 cri.go:89] found id: ""
	I1212 00:39:59.907696  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.907703  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:59.907708  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:59.907775  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:59.932480  530956 cri.go:89] found id: ""
	I1212 00:39:59.932494  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.932509  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:59.932515  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:59.932583  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:59.958173  530956 cri.go:89] found id: ""
	I1212 00:39:59.958188  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.958195  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:59.958200  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:59.958261  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:59.990305  530956 cri.go:89] found id: ""
	I1212 00:39:59.990319  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.990326  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:59.990331  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:59.990390  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:40:00.115674  530956 cri.go:89] found id: ""
	I1212 00:40:00.115690  530956 logs.go:282] 0 containers: []
	W1212 00:40:00.115699  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:40:00.115705  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:40:00.115778  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:40:00.211546  530956 cri.go:89] found id: ""
	I1212 00:40:00.211573  530956 logs.go:282] 0 containers: []
	W1212 00:40:00.211583  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:40:00.211589  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:40:00.211670  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:40:00.306176  530956 cri.go:89] found id: ""
	I1212 00:40:00.306192  530956 logs.go:282] 0 containers: []
	W1212 00:40:00.306200  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:40:00.306208  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:40:00.306220  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:40:00.433331  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:40:00.433360  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:40:00.458175  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:40:00.458193  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:40:00.603203  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:40:00.592976   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.593818   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596110   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596864   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.598662   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:40:00.603213  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:40:00.603224  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:40:00.674062  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:40:00.674086  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:40:03.207059  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:40:03.217144  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:40:03.217203  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:40:03.242377  530956 cri.go:89] found id: ""
	I1212 00:40:03.242391  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.242398  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:40:03.242403  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:40:03.242460  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:40:03.268604  530956 cri.go:89] found id: ""
	I1212 00:40:03.268618  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.268625  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:40:03.268630  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:40:03.268691  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:40:03.293354  530956 cri.go:89] found id: ""
	I1212 00:40:03.293367  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.293374  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:40:03.293379  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:40:03.293437  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:40:03.323082  530956 cri.go:89] found id: ""
	I1212 00:40:03.323095  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.323102  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:40:03.323108  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:40:03.323165  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:40:03.348118  530956 cri.go:89] found id: ""
	I1212 00:40:03.348132  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.348138  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:40:03.348144  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:40:03.348203  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:40:03.375333  530956 cri.go:89] found id: ""
	I1212 00:40:03.375346  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.375353  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:40:03.375358  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:40:03.375418  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:40:03.401835  530956 cri.go:89] found id: ""
	I1212 00:40:03.401850  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.401857  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:40:03.401864  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:40:03.401882  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:40:03.467887  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:40:03.459632   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.460370   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.461940   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.462285   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.463794   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:40:03.467897  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:40:03.467907  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:40:03.536174  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:40:03.536194  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:40:03.564970  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:40:03.564985  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:40:03.632350  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:40:03.632369  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:40:06.147945  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:40:06.157971  530956 kubeadm.go:602] duration metric: took 4m2.720434125s to restartPrimaryControlPlane
	W1212 00:40:06.158027  530956 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 00:40:06.158103  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 00:40:06.569482  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:40:06.582591  530956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:40:06.590536  530956 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:40:06.590592  530956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:40:06.598618  530956 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:40:06.598629  530956 kubeadm.go:158] found existing configuration files:
	
	I1212 00:40:06.598698  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:40:06.606769  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:40:06.606840  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:40:06.614547  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:40:06.622660  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:40:06.622739  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:40:06.630003  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:40:06.638125  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:40:06.638179  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:40:06.645410  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:40:06.652882  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:40:06.652943  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:40:06.660446  530956 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:40:06.700514  530956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:40:06.700561  530956 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:40:06.776561  530956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:40:06.776625  530956 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:40:06.776659  530956 kubeadm.go:319] OS: Linux
	I1212 00:40:06.776702  530956 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:40:06.776749  530956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:40:06.776795  530956 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:40:06.776842  530956 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:40:06.776889  530956 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:40:06.776936  530956 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:40:06.776980  530956 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:40:06.777026  530956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:40:06.777077  530956 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:40:06.848361  530956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:40:06.848476  530956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:40:06.848571  530956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:40:06.858454  530956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:40:06.861922  530956 out.go:252]   - Generating certificates and keys ...
	I1212 00:40:06.862039  530956 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:40:06.862113  530956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:40:06.862184  530956 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:40:06.862240  530956 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:40:06.862305  530956 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:40:06.862362  530956 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:40:06.862420  530956 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:40:06.862477  530956 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:40:06.862546  530956 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:40:06.862613  530956 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:40:06.862665  530956 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:40:06.862736  530956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:40:07.126544  530956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:40:07.166854  530956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:40:07.523509  530956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:40:07.692785  530956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:40:07.825726  530956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:40:07.826395  530956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:40:07.830778  530956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:40:07.833963  530956 out.go:252]   - Booting up control plane ...
	I1212 00:40:07.834090  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:40:07.834172  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:40:07.835198  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:40:07.850333  530956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:40:07.850580  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:40:07.857863  530956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:40:07.858096  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:40:07.858271  530956 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:40:07.986589  530956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:40:07.986752  530956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:44:07.988367  530956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001882345s
	I1212 00:44:07.988392  530956 kubeadm.go:319] 
	I1212 00:44:07.988471  530956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:44:07.988504  530956 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:44:07.988626  530956 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:44:07.988630  530956 kubeadm.go:319] 
	I1212 00:44:07.988743  530956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:44:07.988774  530956 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:44:07.988810  530956 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:44:07.988814  530956 kubeadm.go:319] 
	I1212 00:44:07.993727  530956 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:44:07.994213  530956 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:44:07.994355  530956 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:44:07.994630  530956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 00:44:07.994638  530956 kubeadm.go:319] 
	I1212 00:44:07.994738  530956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 00:44:07.994866  530956 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001882345s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
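The probe kubeadm gives up on here is a plain HTTP GET against the kubelet's local healthz endpoint, and the three troubleshooting commands it names can be run by hand on the node. A minimal sketch, using only commands quoted in the output above (the `minikube ssh` entry point and the profile name functional-035643 are taken from later in this report):

	minikube ssh -p functional-035643
	# on the node:
	curl -sSL http://127.0.0.1:10248/healthz    # the exact probe kubeadm polls for up to 4m0s
	systemctl status kubelet                    # is the unit active at all?
	journalctl -xeu kubelet | tail -n 50        # the kubelet's own account of why it is down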
	
	I1212 00:44:07.994955  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 00:44:08.418732  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:44:08.431583  530956 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:44:08.431639  530956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:44:08.439724  530956 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:44:08.439733  530956 kubeadm.go:158] found existing configuration files:
	
	I1212 00:44:08.439785  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:44:08.447652  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:44:08.447708  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:44:08.454853  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:44:08.462499  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:44:08.462562  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:44:08.470106  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:44:08.477811  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:44:08.477868  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:44:08.485348  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:44:08.493142  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:44:08.493207  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:44:08.501010  530956 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:44:08.619087  530956 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:44:08.619550  530956 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:44:08.685435  530956 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:48:10.247562  530956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 00:48:10.247592  530956 kubeadm.go:319] 
	I1212 00:48:10.247688  530956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 00:48:10.252292  530956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:48:10.252346  530956 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:48:10.252445  530956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:48:10.252500  530956 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:48:10.252533  530956 kubeadm.go:319] OS: Linux
	I1212 00:48:10.252577  530956 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:48:10.252624  530956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:48:10.252670  530956 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:48:10.252716  530956 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:48:10.252768  530956 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:48:10.252816  530956 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:48:10.252859  530956 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:48:10.252906  530956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:48:10.252951  530956 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:48:10.253023  530956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:48:10.253117  530956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:48:10.253205  530956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:48:10.253277  530956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:48:10.256411  530956 out.go:252]   - Generating certificates and keys ...
	I1212 00:48:10.256515  530956 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:48:10.256580  530956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:48:10.256656  530956 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:48:10.256724  530956 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:48:10.256818  530956 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:48:10.256878  530956 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:48:10.256941  530956 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:48:10.257008  530956 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:48:10.257086  530956 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:48:10.257157  530956 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:48:10.257195  530956 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:48:10.257249  530956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:48:10.257299  530956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:48:10.257355  530956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:48:10.257407  530956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:48:10.257469  530956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:48:10.257524  530956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:48:10.257609  530956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:48:10.257674  530956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:48:10.260574  530956 out.go:252]   - Booting up control plane ...
	I1212 00:48:10.260690  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:48:10.260801  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:48:10.260876  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:48:10.260981  530956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:48:10.261102  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:48:10.261235  530956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:48:10.261332  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:48:10.261377  530956 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:48:10.261506  530956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:48:10.261614  530956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:48:10.261707  530956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000091689s
	I1212 00:48:10.261721  530956 kubeadm.go:319] 
	I1212 00:48:10.261778  530956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:48:10.261809  530956 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:48:10.261921  530956 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:48:10.261925  530956 kubeadm.go:319] 
	I1212 00:48:10.262045  530956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:48:10.262083  530956 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:48:10.262112  530956 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:48:10.262133  530956 kubeadm.go:319] 
	I1212 00:48:10.262182  530956 kubeadm.go:403] duration metric: took 12m6.858628348s to StartCluster
	I1212 00:48:10.262232  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:48:10.262300  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:48:10.289138  530956 cri.go:89] found id: ""
	I1212 00:48:10.289156  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.289163  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:48:10.289168  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:48:10.289230  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:48:10.317667  530956 cri.go:89] found id: ""
	I1212 00:48:10.317681  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.317689  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:48:10.317694  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:48:10.317758  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:48:10.347070  530956 cri.go:89] found id: ""
	I1212 00:48:10.347083  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.347091  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:48:10.347096  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:48:10.347155  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:48:10.373637  530956 cri.go:89] found id: ""
	I1212 00:48:10.373650  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.373658  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:48:10.373663  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:48:10.373722  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:48:10.401060  530956 cri.go:89] found id: ""
	I1212 00:48:10.401074  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.401081  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:48:10.401086  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:48:10.401146  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:48:10.426271  530956 cri.go:89] found id: ""
	I1212 00:48:10.426296  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.426303  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:48:10.426309  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:48:10.426375  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:48:10.451340  530956 cri.go:89] found id: ""
	I1212 00:48:10.451354  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.451361  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:48:10.451369  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:48:10.451379  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:48:10.526222  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:48:10.526241  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:48:10.557574  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:48:10.557591  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:48:10.627641  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:48:10.627659  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:48:10.642797  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:48:10.642812  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:48:10.704719  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:48:10.695841   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.696445   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698250   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698875   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.700463   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:48:10.695841   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.696445   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698250   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698875   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.700463   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 00:48:10.704732  530956 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000091689s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
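Of the three preflight warnings, the cgroups one best matches a kubelet that never becomes healthy: the verification output above shows this node on cgroups v1, and the warning states that kubelet v1.35 or newer must have the configuration option 'FailCgroupV1' set to 'false' to keep running there. A minimal sketch of that opt-out, assuming the lower-camel-case KubeletConfiguration spelling of the field and writing it to a standalone fragment rather than claiming where minikube would wire it in:

	# assumption: failCgroupV1 is the KubeletConfiguration spelling of the
	# 'FailCgroupV1' option named in the warning above
	cat <<'EOF' > /tmp/kubelet-cgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF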
	W1212 00:48:10.706992  530956 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:48:10.711878  530956 out.go:203] 
	W1212 00:48:10.714724  530956 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000091689s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:48:10.714773  530956 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 00:48:10.714793  530956 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 00:48:10.717973  530956 out.go:203] 
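Minikube's suggestion above amounts to retrying the profile with the kubelet forced onto the systemd cgroup driver. An illustrative invocation, not a verified fix: the --extra-config string is exactly the one minikube prints, the profile name and CRI-O runtime come from this report, and the bare `minikube` binary name is an abbreviation:

	minikube start -p functional-035643 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd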
	
	
	==> CRI-O <==
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167671353Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167708004Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167752311Z" level=info msg="Create NRI interface"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167872177Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167882466Z" level=info msg="runtime interface created"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167904357Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167910363Z" level=info msg="runtime interface starting up..."
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167919183Z" level=info msg="starting plugins..."
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167936889Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.168004375Z" level=info msg="No systemd watchdog enabled"
	Dec 12 00:36:02 functional-035643 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.854632207Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=640da022-2edf-494c-a660-79e3ab919eba name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.855342483Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=673ecd0d-a1ac-45d5-bb90-3e1f04cdc90f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.855810714Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c57f39b7-fb58-4f67-bde4-1b55c2187b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.856291532Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a97cf7ab-fcf0-4971-8a79-d2c53b6e4ee5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.856721905Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=01bb9bbf-51cf-478f-81f3-99ec7edffcf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.857120764Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=272d9706-6818-4f2e-bd33-95134bf8fb23 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.857524931Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=50b82acc-740c-444d-8ec5-a3c84ad4b6d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.688638675Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=4b143437-6a5c-4f02-b714-2d1bb8cb5a7a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689301235Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=54834de4-a19b-47fe-b478-123a3a9a03c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689852011Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a783bb1e-a84b-4d8e-b3d6-349f1b7407cf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.690318153Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d1830fa6-0c29-40cb-a67f-5512d68b4fbf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691052318Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=28af2ec8-e6ac-48d5-8255-6af4687f21e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691575Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=e0b0d4f6-b48c-4765-bc0e-dcfd4d36d892 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.69204275Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5eedd5b1-6dbb-4fbd-8ae6-426f470f128b name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:48:14.101986   21341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:14.102758   21341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:14.104386   21341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:14.104868   21341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:14.106406   21341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 00:17] overlayfs: idmapped layers are currently not supported
	[Dec12 00:18] overlayfs: idmapped layers are currently not supported
	[Dec12 00:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:48:14 up  3:30,  0 user,  load average: 0.04, 0.18, 0.46
	Linux functional-035643 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:48:11 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:48:12 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 12 00:48:12 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:48:12 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:48:12 functional-035643 kubelet[21215]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:48:12 functional-035643 kubelet[21215]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:48:12 functional-035643 kubelet[21215]: E1212 00:48:12.288787   21215 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:48:12 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:48:12 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:48:12 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 12 00:48:12 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:48:12 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:48:13 functional-035643 kubelet[21236]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:48:13 functional-035643 kubelet[21236]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:48:13 functional-035643 kubelet[21236]: E1212 00:48:13.030636   21236 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:48:13 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:48:13 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:48:13 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 12 00:48:13 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:48:13 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:48:13 functional-035643 kubelet[21259]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:48:13 functional-035643 kubelet[21259]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:48:13 functional-035643 kubelet[21259]: E1212 00:48:13.768463   21259 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:48:13 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:48:13 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 2 (361.030972ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.12s)
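
Note on the failure mode above: the kubelet journal shows the v1.35.0-beta.0 kubelet exiting during configuration validation because the host is still on cgroup v1 ("cgroup v1 support is unsupported and will be removed in a future release"), with systemd restarting it in a loop (restart counter 963 and climbing), so the apiserver on port 8441 never comes up and ComponentHealth fails on connection refused. A minimal triage sketch, assuming the profile name from this run; the stat check is a standard way to distinguish cgroup v1 from v2 and is not something the test itself runs, and the final command is only the generic suggestion from the minikube output above (it changes the cgroup driver, which does not by itself move the host to cgroup v2):

	# cgroup2fs => unified hierarchy (v2); tmpfs => legacy cgroup v1
	minikube -p functional-035643 ssh -- stat -fc %T /sys/fs/cgroup/
	# watch the kubelet crash loop directly, per the suggestion in the log
	minikube -p functional-035643 ssh -- sudo journalctl -xeu kubelet --no-pager
	# minikube's suggested retry; a host-level fix would instead mean booting the host with cgroup v2 enabled
	minikube start -p functional-035643 --extra-config=kubelet.cgroup-driver=systemd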

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-035643 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-035643 apply -f testdata/invalidsvc.yaml: exit status 1 (60.910098ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-035643 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
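
The InvalidService failure here is a cascade rather than the scenario the test targets: kubectl cannot download the OpenAPI schema because nothing is listening on 192.168.49.2:8441, so the apply fails on connectivity before the intentionally invalid manifest is ever evaluated. A quick sketch to separate "apiserver down" from "manifest rejected", using the endpoint from the error and the --validate=false escape hatch kubectl itself mentions above:

	# connection refused => apiserver down; any HTTP response (even 401/403) => apiserver up
	curl -k https://192.168.49.2:8441/healthz
	# skips client-side schema validation, so only server reachability is exercised
	kubectl --context functional-035643 apply -f testdata/invalidsvc.yaml --validate=false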

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-035643 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-035643 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-035643 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-035643 --alsologtostderr -v=1] stderr:
I1212 00:50:35.792483  548326 out.go:360] Setting OutFile to fd 1 ...
I1212 00:50:35.792692  548326 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:35.792710  548326 out.go:374] Setting ErrFile to fd 2...
I1212 00:50:35.792725  548326 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:35.792975  548326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:50:35.793241  548326 mustload.go:66] Loading cluster: functional-035643
I1212 00:50:35.793678  548326 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:50:35.794173  548326 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
I1212 00:50:35.811115  548326 host.go:66] Checking if "functional-035643" exists ...
I1212 00:50:35.811430  548326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 00:50:35.864325  548326 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:50:35.855064935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 00:50:35.864473  548326 api_server.go:166] Checking apiserver status ...
I1212 00:50:35.864555  548326 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:50:35.864598  548326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
I1212 00:50:35.881518  548326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
W1212 00:50:35.987969  548326 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1212 00:50:35.991130  548326 out.go:179] * The control-plane node functional-035643 apiserver is not running: (state=Stopped)
I1212 00:50:35.994018  548326 out.go:179]   To start a cluster, run: "minikube start -p functional-035643"
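
Worth noting how the dashboard command detects this state in the stderr above: api_server.go probes for a running apiserver by executing sudo pgrep -xnf kube-apiserver.*minikube.* over SSH inside the node, and pgrep exiting with status 1 simply means no matching process, which is what yields "apiserver is not running: (state=Stopped)". The same probe can be reproduced by hand, assuming this run's profile:

	# exit status 1 here means no kube-apiserver process matched inside the node
	minikube -p functional-035643 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'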
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-035643
helpers_test.go:244: (dbg) docker inspect functional-035643:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	        "Created": "2025-12-12T00:21:16.539894649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:21:16.600605162Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hostname",
	        "HostsPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hosts",
	        "LogPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a-json.log",
	        "Name": "/functional-035643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-035643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	                "LowerDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-035643",
	                "Source": "/var/lib/docker/volumes/functional-035643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035643",
	                "name.minikube.sigs.k8s.io": "functional-035643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ede6a17442d6bf83b8f4c9f93f252345cec3d0406f82de2d6bd2cfd4713e2163",
	            "SandboxKey": "/var/run/docker/netns/ede6a17442d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d5:12:89:ea:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad01995b183fdebead6c725e2b942ae8ce2d3964b3552789fe5b50ee7e7239a3",
	                    "EndpointID": "d429a1cd0f840d042af4ad7ea0bda6067a342be7fb552083411004a3604b0124",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035643",
	                        "02b8c8e636a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
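
One useful detail in the inspect output above: the container itself is running and 8441/tcp is published to 127.0.0.1:33186, so the refusals are happening inside the node rather than in Docker's port mapping. A sketch for extracting the published port and probing it from the host, reusing the Go-template style minikube's own cli_runner uses for 22/tcp earlier in this log:

	# prints 33186 for this run
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-035643
	# connection refused here confirms nothing is bound to 8441 inside the container
	curl -k https://127.0.0.1:33186/healthz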
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643: exit status 2 (330.252998ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-035643 service --namespace=default --https --url hello-node                                                                              │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ service   │ functional-035643 service hello-node --url --format={{.IP}}                                                                                         │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ service   │ functional-035643 service hello-node --url                                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ mount     │ -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001:/mount-9p --alsologtostderr -v=1              │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ ssh       │ functional-035643 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ ssh       │ functional-035643 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh       │ functional-035643 ssh -- ls -la /mount-9p                                                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh       │ functional-035643 ssh cat /mount-9p/test-1765500627239159423                                                                                        │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh       │ functional-035643 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ ssh       │ functional-035643 ssh sudo umount -f /mount-9p                                                                                                      │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ mount     │ -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3276714616/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ ssh       │ functional-035643 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh       │ functional-035643 ssh -- ls -la /mount-9p                                                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh       │ functional-035643 ssh sudo umount -f /mount-9p                                                                                                      │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ mount     │ -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount2 --alsologtostderr -v=1                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ mount     │ -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount1 --alsologtostderr -v=1                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ mount     │ -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount3 --alsologtostderr -v=1                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ ssh       │ functional-035643 ssh findmnt -T /mount1                                                                                                            │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh       │ functional-035643 ssh findmnt -T /mount2                                                                                                            │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh       │ functional-035643 ssh findmnt -T /mount3                                                                                                            │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ mount     │ -p functional-035643 --kill=true                                                                                                                    │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ start     │ -p functional-035643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ start     │ -p functional-035643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ start     │ -p functional-035643 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-035643 --alsologtostderr -v=1                                                                                      │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:50:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:50:35.555198  548254 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:50:35.555312  548254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:50:35.555323  548254 out.go:374] Setting ErrFile to fd 2...
	I1212 00:50:35.555329  548254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:50:35.555588  548254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:50:35.555945  548254 out.go:368] Setting JSON to false
	I1212 00:50:35.556827  548254 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12781,"bootTime":1765487855,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:50:35.556898  548254 start.go:143] virtualization:  
	I1212 00:50:35.560185  548254 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:50:35.563287  548254 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:50:35.563365  548254 notify.go:221] Checking for updates...
	I1212 00:50:35.569618  548254 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:50:35.572462  548254 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:50:35.575265  548254 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:50:35.577992  548254 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:50:35.580906  548254 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:50:35.584154  548254 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:50:35.584731  548254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:50:35.616876  548254 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:50:35.617061  548254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:50:35.679448  548254 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:50:35.670251588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:50:35.679572  548254 docker.go:319] overlay module found
	I1212 00:50:35.682613  548254 out.go:179] * Using the docker driver based on existing profile
	I1212 00:50:35.685434  548254 start.go:309] selected driver: docker
	I1212 00:50:35.685453  548254 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:50:35.685551  548254 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:50:35.685664  548254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:50:35.739314  548254 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:50:35.730561685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:50:35.739752  548254 cni.go:84] Creating CNI manager for ""
	I1212 00:50:35.739813  548254 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:50:35.739854  548254 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:50:35.742928  548254 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167671353Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167708004Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167752311Z" level=info msg="Create NRI interface"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167872177Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167882466Z" level=info msg="runtime interface created"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167904357Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167910363Z" level=info msg="runtime interface starting up..."
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167919183Z" level=info msg="starting plugins..."
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167936889Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.168004375Z" level=info msg="No systemd watchdog enabled"
	Dec 12 00:36:02 functional-035643 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.854632207Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=640da022-2edf-494c-a660-79e3ab919eba name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.855342483Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=673ecd0d-a1ac-45d5-bb90-3e1f04cdc90f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.855810714Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c57f39b7-fb58-4f67-bde4-1b55c2187b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.856291532Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a97cf7ab-fcf0-4971-8a79-d2c53b6e4ee5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.856721905Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=01bb9bbf-51cf-478f-81f3-99ec7edffcf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.857120764Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=272d9706-6818-4f2e-bd33-95134bf8fb23 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.857524931Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=50b82acc-740c-444d-8ec5-a3c84ad4b6d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.688638675Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=4b143437-6a5c-4f02-b714-2d1bb8cb5a7a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689301235Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=54834de4-a19b-47fe-b478-123a3a9a03c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689852011Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a783bb1e-a84b-4d8e-b3d6-349f1b7407cf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.690318153Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d1830fa6-0c29-40cb-a67f-5512d68b4fbf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691052318Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=28af2ec8-e6ac-48d5-8255-6af4687f21e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691575Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=e0b0d4f6-b48c-4765-bc0e-dcfd4d36d892 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.69204275Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5eedd5b1-6dbb-4fbd-8ae6-426f470f128b name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:50:37.047411   23523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:37.047825   23523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:37.049357   23523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:37.049666   23523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:37.051294   23523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 00:17] overlayfs: idmapped layers are currently not supported
	[Dec12 00:18] overlayfs: idmapped layers are currently not supported
	[Dec12 00:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:50:37 up  3:33,  0 user,  load average: 1.24, 0.50, 0.52
	Linux functional-035643 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:50:34 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:50:35 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 12 00:50:35 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:35 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:35 functional-035643 kubelet[23406]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:35 functional-035643 kubelet[23406]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:35 functional-035643 kubelet[23406]: E1212 00:50:35.533740   23406 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:50:35 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:50:35 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:50:36 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 12 00:50:36 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:36 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:36 functional-035643 kubelet[23420]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:36 functional-035643 kubelet[23420]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:36 functional-035643 kubelet[23420]: E1212 00:50:36.292293   23420 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:50:36 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:50:36 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:50:36 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 12 00:50:36 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:36 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:37 functional-035643 kubelet[23516]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:37 functional-035643 kubelet[23516]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:37 functional-035643 kubelet[23516]: E1212 00:50:37.029441   23516 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:50:37 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:50:37 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 2 (328.301183ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)
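
The repeated kubelet failure captured above ("kubelet is configured to not run on a host using cgroup v1") is what keeps the apiserver unreachable on localhost:8441: the kubelet crash-loops before the control plane can come back. Below is a minimal, illustrative Go sketch of the same host-side check, assuming golang.org/x/sys/unix is available; the file name and output strings are for illustration only and are not taken from the minikube or kubelet sources:

	// cgroupcheck.go: illustrative sketch only, not part of minikube or kubelet.
	// On a cgroup v2 (unified) host, /sys/fs/cgroup is a cgroup2 mount; on a
	// v1 host it is not, which is the condition the kubelet validation rejects.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 host: the kubelet validation above would pass")
		} else {
			fmt.Println("cgroup v1 host: matches the validation failure in the log")
		}
	}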

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 status: exit status 2 (305.425548ms)

                                                
                                                
-- stdout --
	functional-035643
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-035643 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (337.436685ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-035643 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 status -o json: exit status 2 (296.97383ms)

                                                
                                                
-- stdout --
	{"Name":"functional-035643","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-035643 status -o json" : exit status 2
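
The JSON payload above has a stable shape (Name, Host, Kubelet, APIServer, Kubeconfig, Worker), which makes it the most script-friendly of the three status forms exercised here. The following Go sketch decodes the exact output captured above and flags non-running components; the struct and program are illustrative and not taken from the minikube sources:

	// statusjson.go: illustrative sketch; field names mirror the JSON above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type minikubeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"functional-035643","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var s minikubeStatus
		if err := json.Unmarshal([]byte(raw), &s); err != nil {
			panic(err)
		}
		for component, state := range map[string]string{
			"host": s.Host, "kubelet": s.Kubelet, "apiserver": s.APIServer,
		} {
			if state != "Running" {
				// Prints e.g. "kubelet: Stopped", matching the failure above.
				fmt.Printf("%s: %s\n", component, state)
			}
		}
	}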
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-035643
helpers_test.go:244: (dbg) docker inspect functional-035643:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	        "Created": "2025-12-12T00:21:16.539894649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:21:16.600605162Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hostname",
	        "HostsPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hosts",
	        "LogPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a-json.log",
	        "Name": "/functional-035643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-035643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	                "LowerDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-035643",
	                "Source": "/var/lib/docker/volumes/functional-035643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035643",
	                "name.minikube.sigs.k8s.io": "functional-035643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ede6a17442d6bf83b8f4c9f93f252345cec3d0406f82de2d6bd2cfd4713e2163",
	            "SandboxKey": "/var/run/docker/netns/ede6a17442d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d5:12:89:ea:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad01995b183fdebead6c725e2b942ae8ce2d3964b3552789fe5b50ee7e7239a3",
	                    "EndpointID": "d429a1cd0f840d042af4ad7ea0bda6067a342be7fb552083411004a3604b0124",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035643",
	                        "02b8c8e636a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
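
The inspect output above shows 8441/tcp (the apiserver port in the cluster config) published at 127.0.0.1:33186 for this run; the host port is ephemeral and differs between runs. A quick dial against that forwarded port, sketched below for illustration only, helps separate "no listener at all" from "listener present but apiserver unhealthy"; the port literal is specific to this run:

	// apiprobe.go: illustrative sketch; 33186 is the ephemeral host port
	// from the docker inspect output above and will differ on other runs.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:33186", 2*time.Second)
		if err != nil {
			// A failed dial mirrors the refused connections seen earlier;
			// a successful one only proves a listener, not a healthy apiserver.
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("listener present on the forwarded apiserver port")
	}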
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643: exit status 2 (345.674211ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ functional-035643 addons list                                                                                                                       │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ addons  │ functional-035643 addons list -o json                                                                                                               │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ service │ functional-035643 service list                                                                                                                      │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ service │ functional-035643 service list -o json                                                                                                              │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ service │ functional-035643 service --namespace=default --https --url hello-node                                                                              │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ service │ functional-035643 service hello-node --url --format={{.IP}}                                                                                         │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ service │ functional-035643 service hello-node --url                                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ mount   │ -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001:/mount-9p --alsologtostderr -v=1              │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ ssh     │ functional-035643 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ ssh     │ functional-035643 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh     │ functional-035643 ssh -- ls -la /mount-9p                                                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh     │ functional-035643 ssh cat /mount-9p/test-1765500627239159423                                                                                        │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh     │ functional-035643 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ ssh     │ functional-035643 ssh sudo umount -f /mount-9p                                                                                                      │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ mount   │ -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3276714616/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ ssh     │ functional-035643 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh     │ functional-035643 ssh -- ls -la /mount-9p                                                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh     │ functional-035643 ssh sudo umount -f /mount-9p                                                                                                      │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ mount   │ -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount2 --alsologtostderr -v=1                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ mount   │ -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount1 --alsologtostderr -v=1                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ mount   │ -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount3 --alsologtostderr -v=1                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ ssh     │ functional-035643 ssh findmnt -T /mount1                                                                                                            │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh     │ functional-035643 ssh findmnt -T /mount2                                                                                                            │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh     │ functional-035643 ssh findmnt -T /mount3                                                                                                            │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ mount   │ -p functional-035643 --kill=true                                                                                                                    │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:58.676999  530956 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:58.677109  530956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:58.677113  530956 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:58.677117  530956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:58.677347  530956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:35:58.677686  530956 out.go:368] Setting JSON to false
	I1212 00:35:58.678525  530956 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11904,"bootTime":1765487855,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:35:58.678585  530956 start.go:143] virtualization:  
	I1212 00:35:58.682116  530956 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:35:58.686138  530956 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:58.686257  530956 notify.go:221] Checking for updates...
	I1212 00:35:58.691862  530956 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:58.694918  530956 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:35:58.697806  530956 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:35:58.700662  530956 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:35:58.703472  530956 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:58.706890  530956 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:58.706982  530956 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:58.735768  530956 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:35:58.735882  530956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:58.786774  530956 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:35:58.777518712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:58.786886  530956 docker.go:319] overlay module found
	I1212 00:35:58.790016  530956 out.go:179] * Using the docker driver based on existing profile
	I1212 00:35:58.792828  530956 start.go:309] selected driver: docker
	I1212 00:35:58.792840  530956 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:58.792956  530956 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:58.793078  530956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:58.848144  530956 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:35:58.839160729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:58.848551  530956 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:35:58.848575  530956 cni.go:84] Creating CNI manager for ""
	I1212 00:35:58.848625  530956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:58.848666  530956 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:58.851767  530956 out.go:179] * Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	I1212 00:35:58.854549  530956 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:35:58.857426  530956 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:35:58.860284  530956 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:58.860323  530956 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:35:58.860332  530956 cache.go:65] Caching tarball of preloaded images
	I1212 00:35:58.860357  530956 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:35:58.860418  530956 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:35:58.860426  530956 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:35:58.860536  530956 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/config.json ...
	I1212 00:35:58.879785  530956 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:35:58.879795  530956 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:35:58.879813  530956 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:35:58.879843  530956 start.go:360] acquireMachinesLock for functional-035643: {Name:mkb0cdc7d354412594dc63c0234fde00134e758d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:35:58.879904  530956 start.go:364] duration metric: took 45.603µs to acquireMachinesLock for "functional-035643"
	I1212 00:35:58.879924  530956 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:35:58.879928  530956 fix.go:54] fixHost starting: 
	I1212 00:35:58.880192  530956 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:35:58.897119  530956 fix.go:112] recreateIfNeeded on functional-035643: state=Running err=<nil>
	W1212 00:35:58.897146  530956 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:35:58.900349  530956 out.go:252] * Updating the running docker "functional-035643" container ...
	I1212 00:35:58.900378  530956 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:58.900465  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:58.917663  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:58.917980  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:58.917985  530956 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:59.082110  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:35:59.082124  530956 ubuntu.go:182] provisioning hostname "functional-035643"
	I1212 00:35:59.082187  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.099710  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:59.100009  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:59.100017  530956 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-035643 && echo "functional-035643" | sudo tee /etc/hostname
	I1212 00:35:59.259555  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:35:59.259640  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.277248  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:59.277556  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:59.277570  530956 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-035643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-035643/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-035643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:59.427001  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:59.427018  530956 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 00:35:59.427041  530956 ubuntu.go:190] setting up certificates
	I1212 00:35:59.427057  530956 provision.go:84] configureAuth start
	I1212 00:35:59.427116  530956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:35:59.444510  530956 provision.go:143] copyHostCerts
	I1212 00:35:59.444577  530956 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 00:35:59.444584  530956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:35:59.444656  530956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 00:35:59.444762  530956 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 00:35:59.444766  530956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:35:59.444790  530956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 00:35:59.444853  530956 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 00:35:59.444856  530956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:35:59.444879  530956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 00:35:59.444932  530956 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.functional-035643 san=[127.0.0.1 192.168.49.2 functional-035643 localhost minikube]
	I1212 00:35:59.773887  530956 provision.go:177] copyRemoteCerts
	I1212 00:35:59.773940  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:59.773979  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.792006  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:35:59.898459  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:35:59.916125  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:35:59.934437  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:35:59.951804  530956 provision.go:87] duration metric: took 524.726096ms to configureAuth
	I1212 00:35:59.951820  530956 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:59.952018  530956 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:59.952114  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.968939  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:59.969228  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:59.969239  530956 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:36:00.563754  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:36:00.563766  530956 machine.go:97] duration metric: took 1.663381425s to provisionDockerMachine
	I1212 00:36:00.563776  530956 start.go:293] postStartSetup for "functional-035643" (driver="docker")
	I1212 00:36:00.563787  530956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:36:00.563864  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:36:00.563909  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.587628  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:00.694584  530956 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:36:00.698084  530956 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:36:00.698101  530956 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:36:00.698111  530956 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 00:36:00.698167  530956 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 00:36:00.698253  530956 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 00:36:00.698337  530956 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> hosts in /etc/test/nested/copy/490954
	I1212 00:36:00.698388  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/490954
	I1212 00:36:00.706001  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:36:00.723687  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts --> /etc/test/nested/copy/490954/hosts (40 bytes)
	I1212 00:36:00.741785  530956 start.go:296] duration metric: took 177.995516ms for postStartSetup
	I1212 00:36:00.741883  530956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:36:00.741922  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.760230  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:00.864012  530956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:36:00.868713  530956 fix.go:56] duration metric: took 1.988777195s for fixHost
	I1212 00:36:00.868727  530956 start.go:83] releasing machines lock for "functional-035643", held for 1.988815594s
	I1212 00:36:00.868792  530956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:36:00.885011  530956 ssh_runner.go:195] Run: cat /version.json
	I1212 00:36:00.885055  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.885313  530956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:36:00.885366  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.906879  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:00.908992  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:01.113200  530956 ssh_runner.go:195] Run: systemctl --version
	I1212 00:36:01.120029  530956 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:36:01.159180  530956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:36:01.163912  530956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:36:01.163983  530956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:36:01.172622  530956 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:36:01.172636  530956 start.go:496] detecting cgroup driver to use...
	I1212 00:36:01.172680  530956 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:36:01.172728  530956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:36:01.189532  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:36:01.203890  530956 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:36:01.203963  530956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:36:01.220816  530956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:36:01.234536  530956 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:36:01.370158  530956 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:36:01.488527  530956 docker.go:234] disabling docker service ...
	I1212 00:36:01.488594  530956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:36:01.503932  530956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:36:01.516796  530956 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:36:01.637401  530956 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:36:01.761796  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:36:01.774534  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:36:01.788471  530956 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:36:01.788535  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.797095  530956 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:36:01.797168  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.806445  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.815271  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.824092  530956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:36:01.832291  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.841209  530956 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.851179  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.859893  530956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:36:01.867359  530956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:36:01.874599  530956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:01.993195  530956 ssh_runner.go:195] Run: sudo systemctl restart crio
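Condensed, the CRI-O reconfiguration logged above reduces to the following shell sequence (paths and values taken from the log; a sketch, not the exact code path):

  # point crictl at the CRI-O socket
  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
  # pin the pause image and cgroup driver in the drop-in config
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
  # apply the new configuration
  sudo systemctl daemon-reload && sudo systemctl restart crio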
	I1212 00:36:02.173735  530956 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:36:02.173807  530956 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:36:02.177649  530956 start.go:564] Will wait 60s for crictl version
	I1212 00:36:02.177702  530956 ssh_runner.go:195] Run: which crictl
	I1212 00:36:02.181255  530956 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:36:02.206520  530956 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:36:02.206592  530956 ssh_runner.go:195] Run: crio --version
	I1212 00:36:02.236053  530956 ssh_runner.go:195] Run: crio --version
	I1212 00:36:02.270501  530956 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:36:02.273364  530956 cli_runner.go:164] Run: docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:36:02.289602  530956 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:36:02.296412  530956 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 00:36:02.299311  530956 kubeadm.go:884] updating cluster {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:36:02.299467  530956 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:36:02.299536  530956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:36:02.337479  530956 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:36:02.337493  530956 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:36:02.337550  530956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:36:02.363122  530956 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:36:02.363134  530956 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:36:02.363141  530956 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 00:36:02.363237  530956 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-035643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
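The empty ExecStart= line in the generated drop-in is deliberate systemd idiom: an override must first clear the ExecStart list inherited from the base unit before assigning its own command, i.e.

  [Service]
  ExecStart=
  ExecStart=/path/to/kubelet --overridden-flags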
	I1212 00:36:02.363318  530956 ssh_runner.go:195] Run: crio config
	I1212 00:36:02.413513  530956 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 00:36:02.413532  530956 cni.go:84] Creating CNI manager for ""
	I1212 00:36:02.413540  530956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:36:02.413548  530956 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:36:02.413569  530956 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-035643 NodeName:functional-035643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:36:02.413686  530956 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-035643"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
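For comparison, the upstream defaults for these kubeadm config objects can be printed by kubeadm itself (assuming a kubeadm binary on PATH; output shape varies by version):

  kubeadm config print init-defaults
  kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration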
	I1212 00:36:02.413753  530956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:36:02.421266  530956 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:36:02.421324  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:36:02.428464  530956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:36:02.441052  530956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:36:02.453157  530956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1212 00:36:02.466066  530956 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:36:02.472532  530956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:02.578480  530956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:36:02.719058  530956 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643 for IP: 192.168.49.2
	I1212 00:36:02.719069  530956 certs.go:195] generating shared ca certs ...
	I1212 00:36:02.719086  530956 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:36:02.719283  530956 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 00:36:02.719337  530956 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 00:36:02.719344  530956 certs.go:257] generating profile certs ...
	I1212 00:36:02.719449  530956 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key
	I1212 00:36:02.719541  530956 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493
	I1212 00:36:02.719585  530956 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key
	I1212 00:36:02.719735  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 00:36:02.719767  530956 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 00:36:02.719779  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:36:02.719809  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:36:02.719833  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:36:02.719859  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 00:36:02.719902  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:36:02.720656  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:36:02.742914  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:36:02.761747  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:36:02.779250  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:36:02.796535  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:36:02.813979  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:36:02.832344  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:36:02.850165  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:36:02.867847  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 00:36:02.887774  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:36:02.905148  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 00:36:02.923137  530956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:36:02.936200  530956 ssh_runner.go:195] Run: openssl version
	I1212 00:36:02.943771  530956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 00:36:02.951677  530956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 00:36:02.959104  530956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 00:36:02.962881  530956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:36:02.962937  530956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 00:36:03.006038  530956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:36:03.014202  530956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.022168  530956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 00:36:03.030174  530956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.033892  530956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.033949  530956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.075143  530956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:36:03.082587  530956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.089740  530956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:36:03.097209  530956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.100982  530956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.101039  530956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.141961  530956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
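The openssl/ln pairs above install each CA for OpenSSL lookup: the symlink is named after the certificate's subject hash plus a .0 suffix. By hand (a sketch, using the minikubeCA path from the log):

  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL resolves CAs by this hash name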
	I1212 00:36:03.149082  530956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:36:03.152710  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:36:03.193308  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:36:03.236349  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:36:03.279368  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:36:03.320758  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:36:03.362313  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
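Each -checkend 86400 probe asks whether a certificate will still be valid 24 hours (86400 s) from now; a non-zero exit would force regeneration. Standalone:

  openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
    && echo 'valid for at least another day' \
    || echo 'expires within 24h'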
	I1212 00:36:03.403564  530956 kubeadm.go:401] StartCluster: {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:36:03.403639  530956 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:36:03.403697  530956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:36:03.429883  530956 cri.go:89] found id: ""
	I1212 00:36:03.429959  530956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:36:03.437518  530956 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:36:03.437528  530956 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:36:03.437580  530956 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:36:03.444705  530956 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.445211  530956 kubeconfig.go:125] found "functional-035643" server: "https://192.168.49.2:8441"
	I1212 00:36:03.446485  530956 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:36:03.453928  530956 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 00:21:24.717912452 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 00:36:02.461560447 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
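The drift check is the diff -u a few lines up: any non-empty unified diff between the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new counts as drift and triggers reconfiguration, e.g.

  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
    || echo 'drift detected: reconfiguring cluster'   # diff exits 1 on differences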
	I1212 00:36:03.453947  530956 kubeadm.go:1161] stopping kube-system containers ...
	I1212 00:36:03.453959  530956 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 00:36:03.454013  530956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:36:03.481725  530956 cri.go:89] found id: ""
	I1212 00:36:03.481784  530956 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:36:03.499216  530956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:36:03.507872  530956 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 00:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 12 00:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 12 00:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 00:25 /etc/kubernetes/scheduler.conf
	
	I1212 00:36:03.507966  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:36:03.516663  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:36:03.524482  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.524541  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:36:03.532121  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:36:03.539690  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.539749  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:36:03.547386  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:36:03.555458  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.555515  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
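The grep/rm pairs above are a staleness check: each kubeconfig must reference the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. In shell terms (a sketch of the logged behavior):

  for f in kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q https://control-plane.minikube.internal:8441 /etc/kubernetes/$f \
      || sudo rm -f /etc/kubernetes/$f
  done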
	I1212 00:36:03.563050  530956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:36:03.570932  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:03.615951  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.017170  530956 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.401194576s)
	I1212 00:36:05.017241  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.218047  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.283161  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
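Rather than a full kubeadm init, the restart path replays individual phases against the same config (binary path from the log):

  KD=/var/lib/minikube/binaries/v1.35.0-beta.0
  for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
    sudo env PATH="$KD:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
  done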
	I1212 00:36:05.326722  530956 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:36:05.326794  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:05.827661  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:06.327088  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep poll repeated every ~500ms through 00:37:04, still finding no kube-apiserver process ...]
	I1212 00:37:04.827730  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
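The half-second cadence above is the apiserver wait loop; functionally it reduces to polling pgrep until the process appears or minikube's timeout expires (sketch):

  while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    sleep 0.5   # the log shows this poll never succeeding
  done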
	I1212 00:37:05.326940  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:05.327024  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:05.355488  530956 cri.go:89] found id: ""
	I1212 00:37:05.355502  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.355509  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:05.355514  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:05.355580  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:05.379984  530956 cri.go:89] found id: ""
	I1212 00:37:05.379998  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.380005  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:05.380010  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:05.380068  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:05.404986  530956 cri.go:89] found id: ""
	I1212 00:37:05.405001  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.405010  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:05.405015  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:05.405072  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:05.429349  530956 cri.go:89] found id: ""
	I1212 00:37:05.429363  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.429370  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:05.429375  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:05.429438  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:05.453950  530956 cri.go:89] found id: ""
	I1212 00:37:05.453963  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.453970  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:05.453975  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:05.454030  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:05.481105  530956 cri.go:89] found id: ""
	I1212 00:37:05.481118  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.481126  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:05.481131  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:05.481188  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:05.506041  530956 cri.go:89] found id: ""
	I1212 00:37:05.506054  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.506062  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:05.506069  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:05.506079  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:05.575208  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:05.575226  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:05.602842  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:05.602858  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:05.674408  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:05.674425  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:05.688466  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:05.688482  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:05.756639  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:05.748526   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.749193   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.750701   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.751299   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.752883   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:05.748526   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.749193   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.750701   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.751299   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.752883   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
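The describe-nodes failure is the downstream symptom of the stalled apiserver: with no kube-apiserver container running, every request to localhost:8441 is refused. The probe can be rerun by hand with the paths from the log:

  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
    --kubeconfig=/var/lib/minikube/kubeconfig describe nodes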
	I1212 00:37:08.256849  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:08.268489  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:08.268547  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:08.294558  530956 cri.go:89] found id: ""
	I1212 00:37:08.294571  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.294578  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:08.294583  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:08.294647  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:08.324264  530956 cri.go:89] found id: ""
	I1212 00:37:08.324277  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.324284  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:08.324289  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:08.324345  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:08.349672  530956 cri.go:89] found id: ""
	I1212 00:37:08.349685  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.349692  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:08.349697  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:08.349755  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:08.375495  530956 cri.go:89] found id: ""
	I1212 00:37:08.375509  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.375516  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:08.375521  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:08.375579  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:08.405282  530956 cri.go:89] found id: ""
	I1212 00:37:08.405305  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.405312  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:08.405317  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:08.405384  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:08.431165  530956 cri.go:89] found id: ""
	I1212 00:37:08.431178  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.431185  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:08.431190  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:08.431255  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:08.456458  530956 cri.go:89] found id: ""
	I1212 00:37:08.456472  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.456479  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:08.456487  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:08.456498  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:08.470633  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:08.470647  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:08.537226  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:08.528672   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.529056   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.530703   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.531301   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.532944   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:08.528672   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.529056   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.530703   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.531301   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.532944   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:08.537245  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:08.537256  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:08.606512  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:08.606534  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:08.634126  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:08.634142  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:11.201712  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:11.211510  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:11.211571  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:11.249104  530956 cri.go:89] found id: ""
	I1212 00:37:11.249118  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.249135  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:11.249141  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:11.249214  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:11.285113  530956 cri.go:89] found id: ""
	I1212 00:37:11.285132  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.285143  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:11.285148  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:11.285218  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:11.315788  530956 cri.go:89] found id: ""
	I1212 00:37:11.315802  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.315809  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:11.315814  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:11.315875  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:11.346544  530956 cri.go:89] found id: ""
	I1212 00:37:11.346558  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.346565  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:11.346571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:11.346629  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:11.376168  530956 cri.go:89] found id: ""
	I1212 00:37:11.376192  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.376199  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:11.376205  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:11.376274  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:11.401416  530956 cri.go:89] found id: ""
	I1212 00:37:11.401430  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.401437  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:11.401442  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:11.401501  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:11.426005  530956 cri.go:89] found id: ""
	I1212 00:37:11.426019  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.426026  530956 logs.go:284] No container was found matching "kindnet"
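
The sweep above queries crictl once per control-plane component and finds zero containers for each, so the whole control plane is down, not just the apiserver. A hedged sketch of that loop, mirroring the logged `sudo crictl ps -a --quiet --name=<component>` commands (an illustration only, not the actual cri.go implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Components probed in the log, in the same order.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // Matches the logged command: sudo crictl ps -a --quiet --name=<name>
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil || strings.TrimSpace(string(out)) == "" {
                fmt.Printf("no container found matching %q\n", name)
            }
        }
    }
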
	I1212 00:37:11.426034  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:11.426044  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:11.440817  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:11.440832  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:11.505805  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:11.496652   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.497359   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.499136   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.499679   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.501366   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:11.496652   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.497359   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.499136   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.499679   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.501366   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:11.505819  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:11.505832  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:11.581171  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:11.581192  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:11.614667  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:11.614699  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
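
The timestamps show the entire sweep re-running roughly every three seconds (00:37:08, :11, :14, ...), which reads like a fixed-interval poll waiting for the apiserver to come back. A minimal sketch of such a poll loop, assuming a hypothetical `apiserverHealthy` check (this illustrates the cadence only, not minikube's actual wait logic):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "time"
    )

    // apiserverHealthy is a stand-in for the per-sweep checks in the log
    // (pgrep for kube-apiserver, crictl listings, log gathering).
    func apiserverHealthy() bool {
        conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if apiserverHealthy() {
                return nil
            }
            time.Sleep(3 * time.Second) // matches the ~3 s gap between sweeps in the log
        }
        return errors.New("apiserver never became reachable")
    }

    func main() {
        if err := waitForAPIServer(time.Minute); err != nil {
            fmt.Println(err)
        }
    }
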
	I1212 00:37:14.182453  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:14.192683  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:14.192743  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:14.224011  530956 cri.go:89] found id: ""
	I1212 00:37:14.224025  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.224032  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:14.224037  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:14.224097  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:14.253937  530956 cri.go:89] found id: ""
	I1212 00:37:14.253951  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.253958  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:14.253963  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:14.254034  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:14.291025  530956 cri.go:89] found id: ""
	I1212 00:37:14.291039  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.291047  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:14.291057  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:14.291117  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:14.318045  530956 cri.go:89] found id: ""
	I1212 00:37:14.318059  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.318066  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:14.318072  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:14.318133  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:14.345053  530956 cri.go:89] found id: ""
	I1212 00:37:14.345074  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.345082  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:14.345087  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:14.345151  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:14.370315  530956 cri.go:89] found id: ""
	I1212 00:37:14.370328  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.370335  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:14.370340  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:14.370397  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:14.400128  530956 cri.go:89] found id: ""
	I1212 00:37:14.400142  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.400149  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:14.400156  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:14.400166  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:14.469510  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:14.469528  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:14.497946  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:14.497962  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:14.567259  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:14.567276  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:14.581753  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:14.581768  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:14.649334  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:14.641152   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.642071   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.643581   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.644076   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.645435   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:14.641152   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.642071   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.643581   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.644076   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.645435   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:17.151022  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:17.161375  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:17.161433  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:17.187128  530956 cri.go:89] found id: ""
	I1212 00:37:17.187144  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.187151  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:17.187157  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:17.187224  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:17.212545  530956 cri.go:89] found id: ""
	I1212 00:37:17.212560  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.212567  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:17.212573  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:17.212632  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:17.239817  530956 cri.go:89] found id: ""
	I1212 00:37:17.239831  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.239838  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:17.239843  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:17.239900  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:17.267133  530956 cri.go:89] found id: ""
	I1212 00:37:17.267147  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.267155  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:17.267160  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:17.267232  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:17.304534  530956 cri.go:89] found id: ""
	I1212 00:37:17.304548  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.304554  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:17.304559  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:17.304618  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:17.330052  530956 cri.go:89] found id: ""
	I1212 00:37:17.330066  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.330073  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:17.330078  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:17.330133  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:17.354652  530956 cri.go:89] found id: ""
	I1212 00:37:17.354671  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.354678  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:17.354705  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:17.354715  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:17.421755  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:17.412804   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.413382   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.415079   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.415827   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.417552   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:17.412804   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.413382   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.415079   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.415827   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.417552   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:17.421766  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:17.421779  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:17.496810  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:17.496835  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:17.525867  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:17.525886  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:17.594454  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:17.594475  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:20.109774  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:20.119858  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:20.119916  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:20.148052  530956 cri.go:89] found id: ""
	I1212 00:37:20.148066  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.148073  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:20.148078  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:20.148138  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:20.172308  530956 cri.go:89] found id: ""
	I1212 00:37:20.172322  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.172329  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:20.172334  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:20.172392  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:20.200721  530956 cri.go:89] found id: ""
	I1212 00:37:20.200735  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.200743  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:20.200748  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:20.200807  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:20.232123  530956 cri.go:89] found id: ""
	I1212 00:37:20.232136  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.232143  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:20.232148  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:20.232207  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:20.263625  530956 cri.go:89] found id: ""
	I1212 00:37:20.263638  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.263646  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:20.263651  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:20.263710  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:20.292234  530956 cri.go:89] found id: ""
	I1212 00:37:20.292248  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.292255  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:20.292260  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:20.292319  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:20.316784  530956 cri.go:89] found id: ""
	I1212 00:37:20.316798  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.316804  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:20.316812  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:20.316822  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:20.382530  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:20.382550  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:20.397572  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:20.397587  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:20.462516  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:20.453480   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.454137   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.455857   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.456349   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.458004   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:20.453480   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.454137   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.455857   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.456349   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.458004   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:20.462526  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:20.462536  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:20.536302  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:20.536323  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:23.067516  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:23.077747  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:23.077816  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:23.102753  530956 cri.go:89] found id: ""
	I1212 00:37:23.102767  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.102774  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:23.102780  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:23.102845  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:23.128706  530956 cri.go:89] found id: ""
	I1212 00:37:23.128719  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.128727  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:23.128732  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:23.128792  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:23.154481  530956 cri.go:89] found id: ""
	I1212 00:37:23.154495  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.154502  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:23.154507  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:23.154572  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:23.179609  530956 cri.go:89] found id: ""
	I1212 00:37:23.179622  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.179630  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:23.179635  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:23.179699  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:23.205151  530956 cri.go:89] found id: ""
	I1212 00:37:23.205165  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.205172  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:23.205177  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:23.205238  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:23.242297  530956 cri.go:89] found id: ""
	I1212 00:37:23.242312  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.242319  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:23.242324  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:23.242393  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:23.271432  530956 cri.go:89] found id: ""
	I1212 00:37:23.271446  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.271453  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:23.271461  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:23.271472  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:23.339885  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:23.339904  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:23.355098  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:23.355115  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:23.419229  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:23.410980   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.411565   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.413072   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.413369   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.415026   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:23.410980   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.411565   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.413072   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.413369   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.415026   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:23.419240  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:23.419250  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:23.486458  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:23.486478  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:26.021866  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:26.032710  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:26.032772  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:26.058774  530956 cri.go:89] found id: ""
	I1212 00:37:26.058811  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.058818  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:26.058824  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:26.058887  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:26.084731  530956 cri.go:89] found id: ""
	I1212 00:37:26.084746  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.084753  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:26.084758  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:26.084821  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:26.110515  530956 cri.go:89] found id: ""
	I1212 00:37:26.110529  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.110536  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:26.110541  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:26.110598  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:26.137082  530956 cri.go:89] found id: ""
	I1212 00:37:26.137095  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.137103  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:26.137112  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:26.137172  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:26.162724  530956 cri.go:89] found id: ""
	I1212 00:37:26.162738  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.162745  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:26.162751  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:26.162818  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:26.188538  530956 cri.go:89] found id: ""
	I1212 00:37:26.188559  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.188566  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:26.188571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:26.188630  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:26.219848  530956 cri.go:89] found id: ""
	I1212 00:37:26.219862  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.219869  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:26.219876  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:26.219887  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:26.291444  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:26.291463  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:26.306938  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:26.306954  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:26.368571  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:26.360215   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.360983   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.362459   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.362990   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.364656   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:26.360215   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.360983   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.362459   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.362990   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.364656   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:26.368581  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:26.368593  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:26.436229  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:26.436247  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:28.966999  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:28.976928  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:28.976991  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:29.003108  530956 cri.go:89] found id: ""
	I1212 00:37:29.003123  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.003130  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:29.003136  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:29.003212  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:29.028803  530956 cri.go:89] found id: ""
	I1212 00:37:29.028817  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.028824  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:29.028828  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:29.028885  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:29.056738  530956 cri.go:89] found id: ""
	I1212 00:37:29.056758  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.056765  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:29.056770  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:29.056828  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:29.081270  530956 cri.go:89] found id: ""
	I1212 00:37:29.081284  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.081291  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:29.081297  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:29.081354  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:29.106545  530956 cri.go:89] found id: ""
	I1212 00:37:29.106559  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.106566  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:29.106571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:29.106629  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:29.133248  530956 cri.go:89] found id: ""
	I1212 00:37:29.133262  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.133270  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:29.133275  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:29.133335  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:29.162606  530956 cri.go:89] found id: ""
	I1212 00:37:29.162620  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.162627  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:29.162634  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:29.162645  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:29.228360  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:29.228380  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:29.244576  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:29.244593  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:29.318498  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:29.310629   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.311117   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.312690   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.313032   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.314613   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:29.310629   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.311117   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.312690   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.313032   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.314613   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:29.318508  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:29.318519  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:29.386989  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:29.387009  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:31.922335  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:31.932487  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:31.932555  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:31.958330  530956 cri.go:89] found id: ""
	I1212 00:37:31.958344  530956 logs.go:282] 0 containers: []
	W1212 00:37:31.958351  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:31.958356  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:31.958413  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:31.986166  530956 cri.go:89] found id: ""
	I1212 00:37:31.986184  530956 logs.go:282] 0 containers: []
	W1212 00:37:31.986193  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:31.986198  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:31.986263  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:32.018215  530956 cri.go:89] found id: ""
	I1212 00:37:32.018229  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.018236  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:32.018241  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:32.018309  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:32.045496  530956 cri.go:89] found id: ""
	I1212 00:37:32.045510  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.045526  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:32.045531  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:32.045599  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:32.071713  530956 cri.go:89] found id: ""
	I1212 00:37:32.071727  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.071733  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:32.071748  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:32.071809  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:32.096398  530956 cri.go:89] found id: ""
	I1212 00:37:32.096412  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.096419  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:32.096424  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:32.096481  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:32.121995  530956 cri.go:89] found id: ""
	I1212 00:37:32.122009  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.122016  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:32.122024  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:32.122033  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:32.187537  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:32.187556  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:32.202073  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:32.202088  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:32.283678  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:32.269254   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.269661   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.271076   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.271701   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.275343   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:32.269254   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.269661   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.271076   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.271701   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.275343   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:32.283688  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:32.283699  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:32.352426  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:32.352446  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:34.887315  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:34.897374  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:34.897440  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:34.922626  530956 cri.go:89] found id: ""
	I1212 00:37:34.922641  530956 logs.go:282] 0 containers: []
	W1212 00:37:34.922648  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:34.922654  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:34.922741  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:34.948176  530956 cri.go:89] found id: ""
	I1212 00:37:34.948190  530956 logs.go:282] 0 containers: []
	W1212 00:37:34.948199  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:34.948204  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:34.948302  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:34.975855  530956 cri.go:89] found id: ""
	I1212 00:37:34.975869  530956 logs.go:282] 0 containers: []
	W1212 00:37:34.975883  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:34.975889  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:34.975954  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:35.008030  530956 cri.go:89] found id: ""
	I1212 00:37:35.008046  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.008054  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:35.008060  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:35.008144  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:35.033803  530956 cri.go:89] found id: ""
	I1212 00:37:35.033816  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.033823  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:35.033828  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:35.033887  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:35.059521  530956 cri.go:89] found id: ""
	I1212 00:37:35.059535  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.059542  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:35.059547  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:35.059604  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:35.084378  530956 cri.go:89] found id: ""
	I1212 00:37:35.084392  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.084399  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:35.084406  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:35.084416  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:35.150144  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:35.150166  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:35.164295  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:35.164311  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:35.237720  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:35.229555   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.230202   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.231874   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.232277   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.233798   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:35.229555   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.230202   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.231874   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.232277   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.233798   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:35.237730  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:35.237740  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:35.309700  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:35.309721  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
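The cycle above is minikube's control-plane health probe: for each expected component it asks the CRI runtime for matching containers and finds none. A minimal way to reproduce the same per-component check by hand on the node (a sketch; the component list and the crictl invocation are taken verbatim from the probes logged above):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # same command the probe runs via ssh_runner above
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done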
	I1212 00:37:37.842191  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:37.852127  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:37.852198  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:37.883852  530956 cri.go:89] found id: ""
	I1212 00:37:37.883866  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.883873  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:37.883879  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:37.883940  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:37.908974  530956 cri.go:89] found id: ""
	I1212 00:37:37.908988  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.908995  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:37.909000  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:37.909058  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:37.934558  530956 cri.go:89] found id: ""
	I1212 00:37:37.934581  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.934588  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:37.934593  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:37.934659  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:37.960620  530956 cri.go:89] found id: ""
	I1212 00:37:37.960634  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.960641  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:37.960653  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:37.960716  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:37.985545  530956 cri.go:89] found id: ""
	I1212 00:37:37.985559  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.985566  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:37.985571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:37.985649  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:38.019481  530956 cri.go:89] found id: ""
	I1212 00:37:38.019496  530956 logs.go:282] 0 containers: []
	W1212 00:37:38.019511  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:38.019517  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:38.019587  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:38.050591  530956 cri.go:89] found id: ""
	I1212 00:37:38.050606  530956 logs.go:282] 0 containers: []
	W1212 00:37:38.050613  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:38.050621  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:38.050631  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:38.118052  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:38.118073  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:38.133136  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:38.133152  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:38.195824  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:38.187464   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.188136   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.189863   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.190376   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.191908   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:38.187464   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.188136   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.189863   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.190376   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.191908   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:38.195836  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:38.195847  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:38.277789  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:38.277816  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
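When every component lookup comes back empty, the probe falls back to gathering a fixed log bundle. The four collection commands are the ones logged above and can be run directly on the node to inspect why nothing is starting:

	sudo journalctl -u kubelet -n 400                                          # kubelet log tail
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
	sudo journalctl -u crio -n 400                                             # CRI-O log tail
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status, docker fallback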
	I1212 00:37:40.807649  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:40.817759  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:40.817820  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:40.843061  530956 cri.go:89] found id: ""
	I1212 00:37:40.843075  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.843082  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:40.843087  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:40.843147  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:40.867922  530956 cri.go:89] found id: ""
	I1212 00:37:40.867936  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.867944  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:40.867949  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:40.868005  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:40.892630  530956 cri.go:89] found id: ""
	I1212 00:37:40.892644  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.892653  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:40.892657  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:40.892716  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:40.918166  530956 cri.go:89] found id: ""
	I1212 00:37:40.918180  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.918187  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:40.918192  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:40.918250  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:40.944075  530956 cri.go:89] found id: ""
	I1212 00:37:40.944088  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.944095  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:40.944100  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:40.944160  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:40.969320  530956 cri.go:89] found id: ""
	I1212 00:37:40.969333  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.969340  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:40.969346  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:40.969405  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:40.997473  530956 cri.go:89] found id: ""
	I1212 00:37:40.997487  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.997494  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:40.997501  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:40.997512  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:41.028728  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:41.028743  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:41.095087  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:41.095107  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:41.109485  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:41.109501  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:41.176844  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:41.166571   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.167470   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.170826   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.171336   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.172874   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:41.166571   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.167470   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.170826   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.171336   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.172874   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:41.176853  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:41.176864  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
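Every "describe nodes" attempt fails the same way: kubectl cannot reach the apiserver on localhost:8441 (connection refused), so there is no node state to report. A quick reachability check, assuming the apiserver is meant to listen on 8441 as the kubeconfig implies (/livez is a standard kube-apiserver health endpoint, not something this test runs):

	curl -k --connect-timeout 5 https://localhost:8441/livez \
	  || echo "apiserver not listening on 8441"

While the port is closed this prints the fallback message immediately, matching the "connect: connection refused" errors above.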
	I1212 00:37:43.749966  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:43.760058  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:43.760118  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:43.785533  530956 cri.go:89] found id: ""
	I1212 00:37:43.785546  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.785554  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:43.785559  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:43.785616  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:43.812938  530956 cri.go:89] found id: ""
	I1212 00:37:43.812952  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.812960  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:43.812964  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:43.813029  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:43.838583  530956 cri.go:89] found id: ""
	I1212 00:37:43.838596  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.838604  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:43.838609  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:43.838669  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:43.864548  530956 cri.go:89] found id: ""
	I1212 00:37:43.864562  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.864569  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:43.864574  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:43.864633  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:43.889391  530956 cri.go:89] found id: ""
	I1212 00:37:43.889405  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.889412  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:43.889417  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:43.889478  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:43.914183  530956 cri.go:89] found id: ""
	I1212 00:37:43.914196  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.914203  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:43.914209  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:43.914268  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:43.941097  530956 cri.go:89] found id: ""
	I1212 00:37:43.941112  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.941119  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:43.941126  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:43.941136  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:44.007607  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:44.007625  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:44.022976  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:44.022993  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:44.087167  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:44.078521   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.079166   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.080973   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.081418   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.083213   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:44.078521   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.079166   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.080973   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.081418   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.083213   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:44.087177  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:44.087190  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:44.156045  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:44.156065  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
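The timestamps show the probe re-running roughly every three seconds: pgrep finds no kube-apiserver process, the container checks find nothing, and the bundle is gathered again. A sketch of the equivalent wait loop (the three-second interval is inferred from the log timestamps, not taken from minikube's source; the pgrep pattern is verbatim from the log):

	# poll until a kube-apiserver process for this profile appears
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done
	echo "kube-apiserver process found"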
	I1212 00:37:46.684537  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:46.694320  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:46.694383  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:46.718727  530956 cri.go:89] found id: ""
	I1212 00:37:46.718741  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.718751  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:46.718756  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:46.718832  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:46.744753  530956 cri.go:89] found id: ""
	I1212 00:37:46.744767  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.744774  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:46.744779  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:46.744838  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:46.773525  530956 cri.go:89] found id: ""
	I1212 00:37:46.773538  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.773546  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:46.773551  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:46.773608  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:46.798518  530956 cri.go:89] found id: ""
	I1212 00:37:46.798532  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.798539  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:46.798544  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:46.798602  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:46.822867  530956 cri.go:89] found id: ""
	I1212 00:37:46.822880  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.822887  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:46.822893  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:46.822949  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:46.849825  530956 cri.go:89] found id: ""
	I1212 00:37:46.849839  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.849846  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:46.849851  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:46.849909  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:46.874986  530956 cri.go:89] found id: ""
	I1212 00:37:46.874999  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.875011  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:46.875019  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:46.875030  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:46.939887  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:46.931753   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.932307   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.933826   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.934346   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.936019   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:46.931753   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.932307   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.933826   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.934346   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.936019   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:46.939896  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:46.939909  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:47.008024  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:47.008044  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:47.036373  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:47.036388  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:47.101329  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:47.101347  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
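The "describe nodes" step uses the kubectl binary and kubeconfig that minikube provisions on the node itself, so the same failure can be reproduced there verbatim:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# exits 1 with "connection refused" while the apiserver is down, as logged above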
	I1212 00:37:49.616038  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:49.626178  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:49.626240  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:49.652682  530956 cri.go:89] found id: ""
	I1212 00:37:49.652696  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.652703  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:49.652708  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:49.652766  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:49.679170  530956 cri.go:89] found id: ""
	I1212 00:37:49.679185  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.679191  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:49.679197  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:49.679256  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:49.706504  530956 cri.go:89] found id: ""
	I1212 00:37:49.706518  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.706526  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:49.706532  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:49.706592  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:49.732201  530956 cri.go:89] found id: ""
	I1212 00:37:49.732215  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.732222  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:49.732227  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:49.732287  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:49.757094  530956 cri.go:89] found id: ""
	I1212 00:37:49.757107  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.757115  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:49.757119  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:49.757178  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:49.785367  530956 cri.go:89] found id: ""
	I1212 00:37:49.785382  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.785391  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:49.785396  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:49.785466  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:49.809132  530956 cri.go:89] found id: ""
	I1212 00:37:49.809145  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.809152  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:49.809160  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:49.809171  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:49.874272  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:49.874291  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:49.888851  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:49.888866  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:49.954139  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:49.945852   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.946386   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.948180   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.948551   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.950144   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:49.945852   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.946386   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.948180   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.948551   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.950144   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:49.954152  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:49.954164  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:50.021343  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:50.021364  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:52.550858  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:52.560788  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:52.560857  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:52.589542  530956 cri.go:89] found id: ""
	I1212 00:37:52.589556  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.589563  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:52.589568  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:52.589629  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:52.613111  530956 cri.go:89] found id: ""
	I1212 00:37:52.613124  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.613131  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:52.613136  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:52.613195  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:52.637059  530956 cri.go:89] found id: ""
	I1212 00:37:52.637072  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.637079  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:52.637084  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:52.637142  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:52.661402  530956 cri.go:89] found id: ""
	I1212 00:37:52.661415  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.661422  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:52.661428  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:52.661485  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:52.686208  530956 cri.go:89] found id: ""
	I1212 00:37:52.686221  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.686228  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:52.686234  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:52.686292  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:52.714239  530956 cri.go:89] found id: ""
	I1212 00:37:52.714257  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.714272  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:52.714281  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:52.714360  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:52.738849  530956 cri.go:89] found id: ""
	I1212 00:37:52.738862  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.738871  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:52.738878  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:52.738889  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:52.805309  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:52.796653   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.797158   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.798767   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.799405   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.800977   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:52.796653   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.797158   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.798767   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.799405   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.800977   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:37:52.805318  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:52.805329  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:52.873118  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:52.873138  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:52.901072  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:52.901088  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:52.967085  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:52.967104  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:55.482800  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:55.493703  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:55.493761  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:55.527575  530956 cri.go:89] found id: ""
	I1212 00:37:55.527588  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.527595  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:55.527601  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:55.527663  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:55.552177  530956 cri.go:89] found id: ""
	I1212 00:37:55.552191  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.552198  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:55.552203  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:55.552264  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:55.576968  530956 cri.go:89] found id: ""
	I1212 00:37:55.576981  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.576988  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:55.576993  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:55.577054  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:55.603212  530956 cri.go:89] found id: ""
	I1212 00:37:55.603225  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.603232  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:55.603237  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:55.603300  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:55.629922  530956 cri.go:89] found id: ""
	I1212 00:37:55.629936  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.629943  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:55.629949  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:55.630009  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:55.659450  530956 cri.go:89] found id: ""
	I1212 00:37:55.659469  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.659476  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:55.659482  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:55.659540  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:55.683953  530956 cri.go:89] found id: ""
	I1212 00:37:55.683967  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.683974  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:55.683981  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:55.683991  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:55.752000  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:55.752019  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:55.781847  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:55.781863  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:55.846599  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:55.846617  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:55.861470  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:55.861487  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:55.927422  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:55.918837   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.919637   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.921344   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.921624   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.923175   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:37:55.918837   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.919637   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.921344   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.921624   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.923175   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
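At this point the probe has cycled for over twenty seconds without the apiserver process, its container, or any other control-plane container appearing. Two hedged follow-up checks on the node (diagnostic suggestions, not commands the test itself runs): confirm nothing is bound to 8441, and see whether kubelet is running and creating any pods at all:

	sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"
	sudo systemctl status kubelet --no-pager
	sudo crictl pods   # empty if kubelet never created the static pods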
	I1212 00:37:58.429107  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:58.438890  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:58.438951  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:58.463332  530956 cri.go:89] found id: ""
	I1212 00:37:58.463346  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.463353  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:58.463358  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:58.463420  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:58.502844  530956 cri.go:89] found id: ""
	I1212 00:37:58.502859  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.502866  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:58.502871  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:58.502934  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:58.535191  530956 cri.go:89] found id: ""
	I1212 00:37:58.535204  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.535211  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:58.535216  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:58.535275  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:58.560276  530956 cri.go:89] found id: ""
	I1212 00:37:58.560290  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.560296  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:58.560302  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:58.560360  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:58.585008  530956 cri.go:89] found id: ""
	I1212 00:37:58.585022  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.585029  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:58.585034  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:58.585092  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:58.610668  530956 cri.go:89] found id: ""
	I1212 00:37:58.610704  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.610712  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:58.610717  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:58.610791  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:58.633946  530956 cri.go:89] found id: ""
	I1212 00:37:58.633960  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.633967  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:58.633974  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:58.633984  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:58.702859  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:58.702878  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:58.730459  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:58.730475  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:58.799001  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:58.799020  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:58.813707  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:58.813724  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:58.880292  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:58.871863   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.872482   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.874082   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.874638   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.876520   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
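	# The cycle above probes each control-plane component with the same crictl
	# query. A minimal bash sketch of that check, run on the node (assumes shell
	# access, e.g. via `minikube ssh`; the crictl flags are taken verbatim from
	# the Run: lines above):
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container found matching \"$name\""
	done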
	I1212 00:38:01.380529  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:01.390377  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:01.390440  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:01.414742  530956 cri.go:89] found id: ""
	I1212 00:38:01.414755  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.414763  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:01.414769  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:01.414848  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:01.440014  530956 cri.go:89] found id: ""
	I1212 00:38:01.440028  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.440035  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:01.440040  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:01.440100  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:01.469919  530956 cri.go:89] found id: ""
	I1212 00:38:01.469947  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.469955  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:01.469963  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:01.470025  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:01.502102  530956 cri.go:89] found id: ""
	I1212 00:38:01.502116  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.502123  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:01.502128  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:01.502185  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:01.550477  530956 cri.go:89] found id: ""
	I1212 00:38:01.550497  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.550504  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:01.550509  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:01.550572  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:01.575848  530956 cri.go:89] found id: ""
	I1212 00:38:01.575861  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.575868  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:01.575874  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:01.575933  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:01.601329  530956 cri.go:89] found id: ""
	I1212 00:38:01.601342  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.601350  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:01.601358  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:01.601369  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:01.617336  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:01.617351  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:01.681650  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:01.672976   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.673795   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.675461   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.675793   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.677308   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
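	# Every kubectl probe dies with "dial tcp [::1]:8441: connect: connection
	# refused". Two quick manual checks of that port (not part of this log; they
	# assume ss and curl are present on the node, and /livez is the apiserver's
	# standard health endpoint):
	sudo ss -ltnp | grep 8441               # is anything listening on 8441?
	curl -sk https://localhost:8441/livez   # would answer "ok" if the apiserver were up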
	I1212 00:38:01.681659  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:01.681669  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:01.753959  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:01.753987  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:01.784884  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:01.784901  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:04.352224  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:04.362582  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:04.362651  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:04.387422  530956 cri.go:89] found id: ""
	I1212 00:38:04.387436  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.387443  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:04.387448  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:04.387515  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:04.416278  530956 cri.go:89] found id: ""
	I1212 00:38:04.416292  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.416298  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:04.416304  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:04.416360  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:04.445370  530956 cri.go:89] found id: ""
	I1212 00:38:04.445384  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.445391  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:04.445397  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:04.445455  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:04.482755  530956 cri.go:89] found id: ""
	I1212 00:38:04.482768  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.482783  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:04.482789  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:04.482857  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:04.509091  530956 cri.go:89] found id: ""
	I1212 00:38:04.509105  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.509120  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:04.509126  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:04.509194  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:04.539958  530956 cri.go:89] found id: ""
	I1212 00:38:04.539980  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.539987  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:04.539995  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:04.540053  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:04.565072  530956 cri.go:89] found id: ""
	I1212 00:38:04.565085  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.565092  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:04.565100  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:04.565110  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:04.632823  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:04.632844  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:04.659747  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:04.659763  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:04.726963  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:04.726980  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:04.742446  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:04.742462  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:04.811712  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:04.802381   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.803275   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.805003   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.805618   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.807784   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
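	# The failing "describe nodes" probe can be replayed by hand with the exact
	# command from the log; <profile> is a placeholder for the test's profile
	# name, which this excerpt does not show:
	minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  describe nodes --kubeconfig=/var/lib/minikube/kubeconfig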
	I1212 00:38:07.313373  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:07.323395  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:07.323461  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:07.349092  530956 cri.go:89] found id: ""
	I1212 00:38:07.349106  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.349114  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:07.349119  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:07.349178  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:07.374733  530956 cri.go:89] found id: ""
	I1212 00:38:07.374747  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.374754  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:07.374759  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:07.374826  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:07.399425  530956 cri.go:89] found id: ""
	I1212 00:38:07.399439  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.399446  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:07.399450  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:07.399509  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:07.423784  530956 cri.go:89] found id: ""
	I1212 00:38:07.423798  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.423805  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:07.423809  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:07.423866  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:07.449601  530956 cri.go:89] found id: ""
	I1212 00:38:07.449615  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.449622  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:07.449627  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:07.449687  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:07.483778  530956 cri.go:89] found id: ""
	I1212 00:38:07.483793  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.483800  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:07.483805  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:07.483863  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:07.514105  530956 cri.go:89] found id: ""
	I1212 00:38:07.514118  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.514126  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:07.514135  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:07.514144  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:07.584461  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:07.584483  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:07.599076  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:07.599092  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:07.662502  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:07.654255   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.655086   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.656662   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.656958   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.658502   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
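	# The four log sources gathered on every cycle, copied verbatim from the
	# Run: lines above; handy for collecting the same evidence manually on the
	# node:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a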
	I1212 00:38:07.662512  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:07.662524  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:07.730514  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:07.730532  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:10.261580  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:10.271806  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:10.271866  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:10.301488  530956 cri.go:89] found id: ""
	I1212 00:38:10.301509  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.301517  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:10.301522  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:10.301586  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:10.328569  530956 cri.go:89] found id: ""
	I1212 00:38:10.328582  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.328589  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:10.328594  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:10.328651  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:10.352390  530956 cri.go:89] found id: ""
	I1212 00:38:10.352404  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.352411  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:10.352416  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:10.352476  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:10.376595  530956 cri.go:89] found id: ""
	I1212 00:38:10.376608  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.376615  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:10.376620  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:10.376676  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:10.401114  530956 cri.go:89] found id: ""
	I1212 00:38:10.401129  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.401136  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:10.401141  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:10.401202  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:10.426633  530956 cri.go:89] found id: ""
	I1212 00:38:10.426647  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.426654  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:10.426659  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:10.426740  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:10.452233  530956 cri.go:89] found id: ""
	I1212 00:38:10.452246  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.452254  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:10.452262  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:10.452272  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:10.521036  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:10.521055  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:10.535759  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:10.535774  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:10.601793  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:10.593515   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.594074   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.595582   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.596077   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.597523   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
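	# The timestamps show this probe re-running roughly every three seconds. An
	# equivalent bash wait loop (the 3s interval is read off this log's
	# timestamps and the pgrep pattern off its Run: lines; neither is
	# necessarily minikube's internal constant):
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done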
	I1212 00:38:10.601803  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:10.601813  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:10.672541  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:10.672560  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:13.203975  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:13.213736  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:13.213796  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:13.238219  530956 cri.go:89] found id: ""
	I1212 00:38:13.238234  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.238241  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:13.238246  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:13.238303  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:13.262428  530956 cri.go:89] found id: ""
	I1212 00:38:13.262441  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.262449  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:13.262454  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:13.262518  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:13.287118  530956 cri.go:89] found id: ""
	I1212 00:38:13.287132  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.287139  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:13.287144  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:13.287201  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:13.316471  530956 cri.go:89] found id: ""
	I1212 00:38:13.316485  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.316492  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:13.316497  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:13.316554  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:13.340630  530956 cri.go:89] found id: ""
	I1212 00:38:13.340644  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.340651  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:13.340656  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:13.340719  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:13.365167  530956 cri.go:89] found id: ""
	I1212 00:38:13.365180  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.365187  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:13.365192  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:13.365249  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:13.393786  530956 cri.go:89] found id: ""
	I1212 00:38:13.393800  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.393806  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:13.393813  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:13.393824  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:13.460497  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:13.460517  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:13.484321  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:13.484350  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:13.564959  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:13.556484   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.557156   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.558914   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.559521   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.561122   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:13.564970  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:13.564991  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:13.633622  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:13.633641  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:16.165859  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:16.179076  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:16.179137  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:16.204832  530956 cri.go:89] found id: ""
	I1212 00:38:16.204846  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.204853  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:16.204858  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:16.204929  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:16.230899  530956 cri.go:89] found id: ""
	I1212 00:38:16.230912  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.230920  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:16.230924  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:16.230985  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:16.260492  530956 cri.go:89] found id: ""
	I1212 00:38:16.260505  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.260513  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:16.260518  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:16.260582  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:16.285639  530956 cri.go:89] found id: ""
	I1212 00:38:16.285652  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.285660  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:16.285665  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:16.285724  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:16.311240  530956 cri.go:89] found id: ""
	I1212 00:38:16.311253  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.311261  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:16.311266  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:16.311331  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:16.337039  530956 cri.go:89] found id: ""
	I1212 00:38:16.337053  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.337060  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:16.337065  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:16.337132  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:16.363033  530956 cri.go:89] found id: ""
	I1212 00:38:16.363047  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.363053  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:16.363061  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:16.363072  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:16.393154  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:16.393171  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:16.460499  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:16.460516  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:16.475666  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:16.475681  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:16.550358  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:16.542066   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.542789   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.544568   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.545193   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.546806   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
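	# Zero containers for every component, cycle after cycle, points at the
	# kubelet/runtime layer rather than any one pod. Standard first checks on
	# the node (systemctl is assumed available, as it is on the minikube guest):
	sudo systemctl status kubelet --no-pager
	sudo systemctl status crio --no-pager
	sudo crictl ps -a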
	I1212 00:38:16.550367  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:16.550378  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:19.117450  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:19.129437  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:19.129500  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:19.153970  530956 cri.go:89] found id: ""
	I1212 00:38:19.153983  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.153990  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:19.153995  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:19.154052  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:19.179294  530956 cri.go:89] found id: ""
	I1212 00:38:19.179307  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.179314  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:19.179319  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:19.179381  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:19.205071  530956 cri.go:89] found id: ""
	I1212 00:38:19.205091  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.205098  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:19.205103  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:19.205168  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:19.230084  530956 cri.go:89] found id: ""
	I1212 00:38:19.230098  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.230111  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:19.230118  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:19.230181  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:19.255464  530956 cri.go:89] found id: ""
	I1212 00:38:19.255477  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.255485  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:19.255490  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:19.255549  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:19.285389  530956 cri.go:89] found id: ""
	I1212 00:38:19.285402  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.285409  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:19.285415  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:19.285472  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:19.312947  530956 cri.go:89] found id: ""
	I1212 00:38:19.312960  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.312967  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:19.312975  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:19.312985  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:19.350894  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:19.350911  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:19.417923  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:19.417945  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:19.432429  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:19.432445  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:19.505932  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:19.498121   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.498890   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500411   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500702   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.502118   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:19.505942  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:19.505964  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:22.083196  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:22.093637  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:22.093699  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:22.118550  530956 cri.go:89] found id: ""
	I1212 00:38:22.118565  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.118572  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:22.118578  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:22.118636  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:22.145134  530956 cri.go:89] found id: ""
	I1212 00:38:22.145147  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.145155  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:22.145159  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:22.145217  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:22.170293  530956 cri.go:89] found id: ""
	I1212 00:38:22.170306  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.170313  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:22.170318  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:22.170386  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:22.197536  530956 cri.go:89] found id: ""
	I1212 00:38:22.197550  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.197571  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:22.197576  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:22.197642  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:22.222476  530956 cri.go:89] found id: ""
	I1212 00:38:22.222490  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.222497  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:22.222502  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:22.222560  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:22.247759  530956 cri.go:89] found id: ""
	I1212 00:38:22.247779  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.247792  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:22.247797  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:22.247865  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:22.278000  530956 cri.go:89] found id: ""
	I1212 00:38:22.278022  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.278030  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:22.278037  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:22.278047  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:22.306112  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:22.306127  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:22.377647  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:22.377675  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:22.394490  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:22.394506  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:22.462988  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:22.454404   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.455058   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.456732   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.457164   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.458875   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
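	# Every probe targets https://localhost:8441 because that is the server
	# pinned in the kubeconfig each command passes explicitly. To confirm the
	# address on the node (the grep pattern assumes the file's usual layout):
	sudo grep 'server:' /var/lib/minikube/kubeconfig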
	I1212 00:38:22.462999  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:22.463010  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:25.044675  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:25.054532  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:25.054592  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:25.080041  530956 cri.go:89] found id: ""
	I1212 00:38:25.080055  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.080062  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:25.080068  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:25.080129  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:25.105941  530956 cri.go:89] found id: ""
	I1212 00:38:25.105957  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.105965  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:25.105971  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:25.106038  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:25.136063  530956 cri.go:89] found id: ""
	I1212 00:38:25.136078  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.136086  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:25.136096  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:25.136159  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:25.161125  530956 cri.go:89] found id: ""
	I1212 00:38:25.161140  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.161147  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:25.161153  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:25.161212  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:25.187318  530956 cri.go:89] found id: ""
	I1212 00:38:25.187333  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.187340  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:25.187345  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:25.187407  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:25.213505  530956 cri.go:89] found id: ""
	I1212 00:38:25.213519  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.213528  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:25.213533  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:25.213593  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:25.238804  530956 cri.go:89] found id: ""
	I1212 00:38:25.238818  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.238825  530956 logs.go:284] No container was found matching "kindnet"
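
Each cycle above scans for the expected control-plane containers with crictl; an empty ID list produces the "No container was found matching ..." warnings, meaning the component never started. An illustrative sketch of that scan (a hypothetical helper, not minikube's cri.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// Same query as the log: all states, IDs only, filtered by name.
			out, _ := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			if len(strings.Fields(string(out))) == 0 {
				fmt.Printf("no container found matching %q\n", name)
			}
		}
	}
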
	I1212 00:38:25.238833  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:25.238845  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:25.253570  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:25.253586  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:25.319774  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:25.310440   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.311167   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.312774   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.313270   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.315248   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:25.310440   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.311167   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.312774   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.313270   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.315248   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
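
The failed "describe nodes" gather runs the kubectl binary that minikube ships into the node, pointed at the in-node kubeconfig, so it fails the same way as any client while the apiserver is down. A sketch of that invocation, assuming local execution and using the paths shown in the log above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl " +
			"describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// With the apiserver down this exits with status 1, as above.
			fmt.Println("describe nodes failed:", err)
		}
		fmt.Print(string(out))
	}
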
	I1212 00:38:25.319800  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:25.319811  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:25.392356  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:25.392375  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
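
The "container status" gather above is a shell one-liner that prefers crictl and falls back to docker if crictl is missing or fails. A Go sketch of the same fallback logic (illustrative, assuming either CLI may be absent):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			// crictl unavailable or failing: fall back to docker.
			out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		}
		if err != nil {
			fmt.Println("no container runtime CLI answered:", err)
			return
		}
		fmt.Print(string(out))
	}
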
	I1212 00:38:25.422668  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:25.422706  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:27.990024  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:28.003363  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:28.003444  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:28.033003  530956 cri.go:89] found id: ""
	I1212 00:38:28.033017  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.033024  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:28.033029  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:28.033090  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:28.059854  530956 cri.go:89] found id: ""
	I1212 00:38:28.059869  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.059876  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:28.059881  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:28.059946  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:28.085318  530956 cri.go:89] found id: ""
	I1212 00:38:28.085332  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.085339  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:28.085349  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:28.085408  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:28.111377  530956 cri.go:89] found id: ""
	I1212 00:38:28.111390  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.111397  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:28.111403  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:28.111464  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:28.140880  530956 cri.go:89] found id: ""
	I1212 00:38:28.140894  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.140910  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:28.140915  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:28.140985  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:28.166928  530956 cri.go:89] found id: ""
	I1212 00:38:28.166943  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.166950  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:28.166955  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:28.167013  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:28.193116  530956 cri.go:89] found id: ""
	I1212 00:38:28.193129  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.193136  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:28.193144  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:28.193157  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:28.207536  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:28.207551  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:28.273869  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:28.265632   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.266161   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.267761   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.268328   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.269940   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:28.265632   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.266161   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.267761   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.268328   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.269940   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:28.273878  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:28.273888  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:28.341616  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:28.341634  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:28.370270  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:28.370286  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
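
The timestamps across these cycles (00:38:22, :25, :28, ...) show the health check retrying on a roughly three-second cadence. A hypothetical sketch of such a poll loop, probing the apiserver port until it answers or a deadline passes (the interval and deadline here are assumptions for illustration):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("apiserver is answering")
				return
			}
			time.Sleep(3 * time.Second) // matches the cycle spacing in the log
		}
		fmt.Println("gave up waiting for the apiserver")
	}
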
	I1212 00:38:30.938812  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:30.948944  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:30.949000  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:30.977305  530956 cri.go:89] found id: ""
	I1212 00:38:30.977320  530956 logs.go:282] 0 containers: []
	W1212 00:38:30.977327  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:30.977333  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:30.977393  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:31.004773  530956 cri.go:89] found id: ""
	I1212 00:38:31.004793  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.004802  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:31.004807  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:31.004878  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:31.034217  530956 cri.go:89] found id: ""
	I1212 00:38:31.034231  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.034238  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:31.034243  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:31.034299  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:31.059299  530956 cri.go:89] found id: ""
	I1212 00:38:31.059313  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.059320  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:31.059325  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:31.059389  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:31.085777  530956 cri.go:89] found id: ""
	I1212 00:38:31.085794  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.085801  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:31.085806  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:31.085870  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:31.113432  530956 cri.go:89] found id: ""
	I1212 00:38:31.113445  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.113453  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:31.113458  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:31.113517  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:31.140290  530956 cri.go:89] found id: ""
	I1212 00:38:31.140303  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.140310  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:31.140318  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:31.140329  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:31.170079  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:31.170095  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:31.237344  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:31.237366  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:31.252705  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:31.252722  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:31.314201  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:31.305835   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.306554   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.308300   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.308814   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.310412   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:31.305835   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.306554   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.308300   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.308814   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.310412   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:31.314211  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:31.314222  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:33.887992  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:33.897911  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:33.897978  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:33.922473  530956 cri.go:89] found id: ""
	I1212 00:38:33.922487  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.922494  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:33.922499  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:33.922556  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:33.947695  530956 cri.go:89] found id: ""
	I1212 00:38:33.947709  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.947716  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:33.947720  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:33.947779  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:33.975167  530956 cri.go:89] found id: ""
	I1212 00:38:33.975181  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.975188  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:33.975194  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:33.975256  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:33.999707  530956 cri.go:89] found id: ""
	I1212 00:38:33.999722  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.999731  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:33.999736  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:33.999806  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:34.028202  530956 cri.go:89] found id: ""
	I1212 00:38:34.028216  530956 logs.go:282] 0 containers: []
	W1212 00:38:34.028224  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:34.028229  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:34.028289  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:34.053144  530956 cri.go:89] found id: ""
	I1212 00:38:34.053158  530956 logs.go:282] 0 containers: []
	W1212 00:38:34.053169  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:34.053175  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:34.053239  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:34.080035  530956 cri.go:89] found id: ""
	I1212 00:38:34.080050  530956 logs.go:282] 0 containers: []
	W1212 00:38:34.080058  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:34.080066  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:34.080076  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:34.146175  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:34.146192  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:34.160652  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:34.160668  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:34.223173  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:34.215210   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.216058   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.217521   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.217956   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.219415   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:34.215210   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.216058   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.217521   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.217956   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.219415   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:34.223184  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:34.223194  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:34.292571  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:34.292590  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:36.820393  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:36.830345  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:36.830406  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:36.854187  530956 cri.go:89] found id: ""
	I1212 00:38:36.854201  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.854208  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:36.854213  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:36.854268  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:36.882747  530956 cri.go:89] found id: ""
	I1212 00:38:36.882767  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.882774  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:36.882779  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:36.882836  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:36.909295  530956 cri.go:89] found id: ""
	I1212 00:38:36.909310  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.909317  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:36.909321  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:36.909380  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:36.939718  530956 cri.go:89] found id: ""
	I1212 00:38:36.939732  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.939739  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:36.939745  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:36.939805  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:36.985049  530956 cri.go:89] found id: ""
	I1212 00:38:36.985063  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.985070  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:36.985075  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:36.985135  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:37.018069  530956 cri.go:89] found id: ""
	I1212 00:38:37.018092  530956 logs.go:282] 0 containers: []
	W1212 00:38:37.018101  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:37.018107  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:37.018197  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:37.045321  530956 cri.go:89] found id: ""
	I1212 00:38:37.045335  530956 logs.go:282] 0 containers: []
	W1212 00:38:37.045342  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:37.045349  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:37.045366  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:37.110695  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:37.110716  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:37.125484  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:37.125500  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:37.191768  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:37.183160   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.184307   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.184933   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.186530   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.186898   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:37.183160   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.184307   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.184933   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.186530   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.186898   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:37.191778  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:37.191789  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:37.258979  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:37.258998  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:39.789133  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:39.799919  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:39.799985  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:39.825459  530956 cri.go:89] found id: ""
	I1212 00:38:39.825473  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.825481  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:39.825487  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:39.825550  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:39.853725  530956 cri.go:89] found id: ""
	I1212 00:38:39.853741  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.853750  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:39.853757  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:39.853833  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:39.879329  530956 cri.go:89] found id: ""
	I1212 00:38:39.879343  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.879350  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:39.879355  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:39.879417  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:39.910098  530956 cri.go:89] found id: ""
	I1212 00:38:39.910111  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.910118  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:39.910124  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:39.910184  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:39.940693  530956 cri.go:89] found id: ""
	I1212 00:38:39.940707  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.940714  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:39.940719  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:39.940779  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:39.967072  530956 cri.go:89] found id: ""
	I1212 00:38:39.967085  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.967093  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:39.967099  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:39.967165  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:39.992659  530956 cri.go:89] found id: ""
	I1212 00:38:39.992672  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.992680  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:39.992687  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:39.992697  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:40.113165  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:40.113185  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:40.130134  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:40.130150  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:40.200442  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:40.191596   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.192392   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.194267   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.194628   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.196363   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:40.191596   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.192392   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.194267   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.194628   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.196363   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:40.200453  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:40.200463  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:40.271707  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:40.271728  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:42.801953  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:42.811892  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:42.811958  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:42.841306  530956 cri.go:89] found id: ""
	I1212 00:38:42.841320  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.841328  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:42.841334  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:42.841395  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:42.869294  530956 cri.go:89] found id: ""
	I1212 00:38:42.869308  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.869314  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:42.869319  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:42.869381  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:42.898367  530956 cri.go:89] found id: ""
	I1212 00:38:42.898381  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.898388  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:42.898393  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:42.898454  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:42.925039  530956 cri.go:89] found id: ""
	I1212 00:38:42.925052  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.925059  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:42.925065  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:42.925125  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:42.955313  530956 cri.go:89] found id: ""
	I1212 00:38:42.955327  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.955334  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:42.955339  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:42.955404  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:42.979722  530956 cri.go:89] found id: ""
	I1212 00:38:42.979735  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.979742  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:42.979747  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:42.979808  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:43.027955  530956 cri.go:89] found id: ""
	I1212 00:38:43.027969  530956 logs.go:282] 0 containers: []
	W1212 00:38:43.027976  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:43.027983  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:43.027996  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:43.043222  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:43.043240  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:43.111269  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:43.102010   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.103597   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.104461   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.105967   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.106269   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:43.102010   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.103597   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.104461   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.105967   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.106269   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:43.111321  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:43.111331  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:43.177977  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:43.177997  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:43.206880  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:43.206895  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:45.775312  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:45.785672  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:45.785736  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:45.811375  530956 cri.go:89] found id: ""
	I1212 00:38:45.811389  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.811396  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:45.811400  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:45.811459  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:45.836941  530956 cri.go:89] found id: ""
	I1212 00:38:45.836956  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.836963  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:45.836968  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:45.837031  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:45.863375  530956 cri.go:89] found id: ""
	I1212 00:38:45.863389  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.863396  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:45.863402  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:45.863461  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:45.888628  530956 cri.go:89] found id: ""
	I1212 00:38:45.888641  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.888648  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:45.888654  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:45.888712  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:45.917199  530956 cri.go:89] found id: ""
	I1212 00:38:45.917213  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.917221  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:45.917226  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:45.917289  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:45.944008  530956 cri.go:89] found id: ""
	I1212 00:38:45.944022  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.944029  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:45.944034  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:45.944093  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:45.968971  530956 cri.go:89] found id: ""
	I1212 00:38:45.968984  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.968992  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:45.969000  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:45.969010  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:46.034356  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:46.034375  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:46.048756  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:46.048771  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:46.115073  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:46.106286   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.107154   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.108746   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.109165   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.110638   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:46.106286   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.107154   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.108746   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.109165   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.110638   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:46.115096  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:46.115107  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:46.182387  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:46.182407  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:48.712482  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:48.722635  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:48.722715  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:48.752202  530956 cri.go:89] found id: ""
	I1212 00:38:48.752215  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.752222  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:48.752227  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:48.752287  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:48.779084  530956 cri.go:89] found id: ""
	I1212 00:38:48.779097  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.779105  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:48.779110  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:48.779165  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:48.803352  530956 cri.go:89] found id: ""
	I1212 00:38:48.803366  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.803375  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:48.803380  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:48.803441  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:48.829635  530956 cri.go:89] found id: ""
	I1212 00:38:48.829649  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.829656  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:48.829661  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:48.829720  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:48.854311  530956 cri.go:89] found id: ""
	I1212 00:38:48.854324  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.854332  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:48.854337  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:48.854394  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:48.879369  530956 cri.go:89] found id: ""
	I1212 00:38:48.879383  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.879390  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:48.879396  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:48.879456  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:48.908110  530956 cri.go:89] found id: ""
	I1212 00:38:48.908124  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.908131  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:48.908138  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:48.908151  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:48.972035  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:48.972053  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:48.986646  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:48.986668  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:49.053589  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:49.045696   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.046251   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.047754   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.048291   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.049804   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:49.053599  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:49.053608  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:49.123212  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:49.123236  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:51.651584  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:51.662032  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:51.662096  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:51.687559  530956 cri.go:89] found id: ""
	I1212 00:38:51.687573  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.687580  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:51.687586  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:51.687655  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:51.713801  530956 cri.go:89] found id: ""
	I1212 00:38:51.713828  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.713835  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:51.713840  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:51.713903  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:51.748993  530956 cri.go:89] found id: ""
	I1212 00:38:51.749006  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.749028  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:51.749034  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:51.749091  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:51.777108  530956 cri.go:89] found id: ""
	I1212 00:38:51.777122  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.777129  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:51.777135  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:51.777200  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:51.805174  530956 cri.go:89] found id: ""
	I1212 00:38:51.805188  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.805195  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:51.805201  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:51.805266  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:51.830660  530956 cri.go:89] found id: ""
	I1212 00:38:51.830674  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.830701  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:51.830706  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:51.830778  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:51.855989  530956 cri.go:89] found id: ""
	I1212 00:38:51.856003  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.856017  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:51.856024  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:51.856035  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:51.887241  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:51.887257  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:51.953055  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:51.953075  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:51.969638  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:51.969660  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:52.045683  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:52.037116   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.037541   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.039334   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.039776   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.041452   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:52.045694  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:52.045705  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:54.617323  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:54.627443  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:54.627502  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:54.651505  530956 cri.go:89] found id: ""
	I1212 00:38:54.651519  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.651526  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:54.651532  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:54.651589  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:54.675935  530956 cri.go:89] found id: ""
	I1212 00:38:54.675961  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.675968  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:54.675973  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:54.676042  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:54.701954  530956 cri.go:89] found id: ""
	I1212 00:38:54.701970  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.701979  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:54.701986  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:54.702056  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:54.733636  530956 cri.go:89] found id: ""
	I1212 00:38:54.733657  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.733666  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:54.733671  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:54.733742  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:54.761858  530956 cri.go:89] found id: ""
	I1212 00:38:54.761885  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.761892  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:54.761897  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:54.761965  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:54.798397  530956 cri.go:89] found id: ""
	I1212 00:38:54.798411  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.798431  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:54.798436  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:54.798502  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:54.823810  530956 cri.go:89] found id: ""
	I1212 00:38:54.823824  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.823831  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:54.823840  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:54.823850  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:54.891230  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:54.891249  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:54.907075  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:54.907092  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:54.979081  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:54.970178   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.971009   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.971760   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.973599   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.973900   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:54.979091  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:54.979103  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:55.048465  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:55.048486  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:57.579400  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:57.590372  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:57.590435  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:57.618082  530956 cri.go:89] found id: ""
	I1212 00:38:57.618096  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.618103  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:57.618108  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:57.618169  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:57.644801  530956 cri.go:89] found id: ""
	I1212 00:38:57.644815  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.644822  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:57.644827  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:57.644886  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:57.670018  530956 cri.go:89] found id: ""
	I1212 00:38:57.670032  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.670045  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:57.670050  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:57.670111  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:57.695026  530956 cri.go:89] found id: ""
	I1212 00:38:57.695040  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.695047  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:57.695052  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:57.695116  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:57.726077  530956 cri.go:89] found id: ""
	I1212 00:38:57.726091  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.726098  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:57.726103  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:57.726182  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:57.766280  530956 cri.go:89] found id: ""
	I1212 00:38:57.766295  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.766302  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:57.766308  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:57.766366  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:57.794888  530956 cri.go:89] found id: ""
	I1212 00:38:57.794902  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.794909  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:57.794917  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:57.794931  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:57.861092  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:57.861111  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:57.876214  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:57.876230  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:57.943746  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:57.933552   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.934412   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.936560   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.937552   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.938297   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:57.943757  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:57.943767  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:58.013702  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:58.013722  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:00.543612  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:00.553735  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:00.553795  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:00.580386  530956 cri.go:89] found id: ""
	I1212 00:39:00.580400  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.580407  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:00.580412  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:00.580471  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:00.608511  530956 cri.go:89] found id: ""
	I1212 00:39:00.608525  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.608532  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:00.608537  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:00.608594  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:00.633613  530956 cri.go:89] found id: ""
	I1212 00:39:00.633627  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.633634  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:00.633639  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:00.633696  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:00.658755  530956 cri.go:89] found id: ""
	I1212 00:39:00.658769  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.658776  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:00.658782  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:00.658845  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:00.688160  530956 cri.go:89] found id: ""
	I1212 00:39:00.688174  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.688181  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:00.688187  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:00.688246  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:00.714115  530956 cri.go:89] found id: ""
	I1212 00:39:00.714129  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.714136  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:00.714142  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:00.714203  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:00.743594  530956 cri.go:89] found id: ""
	I1212 00:39:00.743607  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.743614  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:00.743622  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:00.743632  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:00.825728  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:00.825750  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:00.840575  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:00.840590  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:00.904328  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:00.896372   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.896852   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.898503   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.898943   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.900532   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:00.904339  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:00.904350  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:00.971157  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:00.971177  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:03.500568  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:03.510753  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:03.510824  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:03.535333  530956 cri.go:89] found id: ""
	I1212 00:39:03.535347  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.535354  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:03.535359  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:03.535422  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:03.560575  530956 cri.go:89] found id: ""
	I1212 00:39:03.560589  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.560597  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:03.560602  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:03.560659  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:03.589048  530956 cri.go:89] found id: ""
	I1212 00:39:03.589062  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.589069  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:03.589075  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:03.589131  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:03.614812  530956 cri.go:89] found id: ""
	I1212 00:39:03.614826  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.614834  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:03.614839  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:03.614908  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:03.641138  530956 cri.go:89] found id: ""
	I1212 00:39:03.641152  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.641158  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:03.641164  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:03.641221  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:03.669855  530956 cri.go:89] found id: ""
	I1212 00:39:03.669869  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.669876  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:03.669884  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:03.669943  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:03.694625  530956 cri.go:89] found id: ""
	I1212 00:39:03.694650  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.694657  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:03.694665  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:03.694676  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:03.761872  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:03.761891  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:03.777581  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:03.777598  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:03.843774  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:03.835704   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.836242   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.838010   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.838382   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.839850   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:03.843783  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:03.843793  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:03.914951  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:03.914977  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:06.443917  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:06.454370  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:06.454434  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:06.482109  530956 cri.go:89] found id: ""
	I1212 00:39:06.482123  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.482131  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:06.482136  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:06.482199  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:06.509716  530956 cri.go:89] found id: ""
	I1212 00:39:06.509730  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.509737  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:06.509742  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:06.509800  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:06.537521  530956 cri.go:89] found id: ""
	I1212 00:39:06.537535  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.537542  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:06.537548  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:06.537606  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:06.562757  530956 cri.go:89] found id: ""
	I1212 00:39:06.562770  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.562778  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:06.562783  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:06.562842  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:06.587417  530956 cri.go:89] found id: ""
	I1212 00:39:06.587431  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.587439  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:06.587443  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:06.587507  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:06.612775  530956 cri.go:89] found id: ""
	I1212 00:39:06.612789  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.612797  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:06.612804  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:06.612864  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:06.637360  530956 cri.go:89] found id: ""
	I1212 00:39:06.637374  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.637382  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:06.637389  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:06.637400  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:06.651687  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:06.651703  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:06.714510  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:06.706351   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.706972   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.708728   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.709213   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.710650   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:06.714521  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:06.714531  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:06.793242  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:06.793263  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:06.825153  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:06.825170  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:09.391589  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:09.401762  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:09.401823  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:09.426113  530956 cri.go:89] found id: ""
	I1212 00:39:09.426127  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.426135  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:09.426139  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:09.426197  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:09.455495  530956 cri.go:89] found id: ""
	I1212 00:39:09.455509  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.455522  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:09.455527  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:09.455586  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:09.484947  530956 cri.go:89] found id: ""
	I1212 00:39:09.484961  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.484969  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:09.484975  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:09.485038  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:09.510850  530956 cri.go:89] found id: ""
	I1212 00:39:09.510865  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.510873  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:09.510878  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:09.510936  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:09.536933  530956 cri.go:89] found id: ""
	I1212 00:39:09.536955  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.536963  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:09.536968  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:09.537038  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:09.565308  530956 cri.go:89] found id: ""
	I1212 00:39:09.565321  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.565328  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:09.565333  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:09.565391  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:09.596694  530956 cri.go:89] found id: ""
	I1212 00:39:09.596708  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.596716  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:09.596724  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:09.596734  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:09.661768  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:09.661787  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:09.676496  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:09.676512  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:09.751036  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:09.740810   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.741527   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.743548   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.744409   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.746279   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:09.751057  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:09.751069  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:09.831885  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:09.831905  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:12.361885  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:12.371912  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:12.371972  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:12.400852  530956 cri.go:89] found id: ""
	I1212 00:39:12.400867  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.400874  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:12.400879  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:12.400939  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:12.426229  530956 cri.go:89] found id: ""
	I1212 00:39:12.426244  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.426251  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:12.426256  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:12.426313  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:12.455450  530956 cri.go:89] found id: ""
	I1212 00:39:12.455465  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.455472  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:12.455477  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:12.455542  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:12.480339  530956 cri.go:89] found id: ""
	I1212 00:39:12.480353  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.480360  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:12.480365  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:12.480425  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:12.508098  530956 cri.go:89] found id: ""
	I1212 00:39:12.508112  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.508119  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:12.508124  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:12.508185  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:12.534232  530956 cri.go:89] found id: ""
	I1212 00:39:12.534246  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.534253  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:12.534259  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:12.534318  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:12.564030  530956 cri.go:89] found id: ""
	I1212 00:39:12.564045  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.564053  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:12.564061  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:12.564072  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:12.578300  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:12.578315  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:12.645692  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:12.637241   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.637958   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.639484   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.639902   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.641511   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:12.637241   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.637958   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.639484   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.639902   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.641511   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:12.645702  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:12.645714  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:12.716817  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:12.716835  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:12.755607  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:12.755622  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
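
The block above is one full probe cycle: minikube asks the CRI runtime for each expected control-plane container by name and, finding none, falls back to gathering logs. A minimal Go sketch of the per-component check, assuming crictl is run locally via os/exec rather than through minikube's ssh_runner (both the helper name and local execution are assumptions for illustration):

```go
// A sketch of the per-component container check seen above,
// mirroring `sudo crictl ps -a --quiet --name=<component>`.
// Assumption: crictl is run locally via os/exec, not over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// listContainerIDs returns the IDs of all containers (any state)
// whose name matches component; --quiet prints one ID per line.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			// This branch matches the log's
			// `No container was found matching "<component>"`.
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```
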
	I1212 00:39:15.328461  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:15.338656  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:15.338747  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:15.368754  530956 cri.go:89] found id: ""
	I1212 00:39:15.368768  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.368775  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:15.368780  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:15.368839  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:15.395430  530956 cri.go:89] found id: ""
	I1212 00:39:15.395444  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.395451  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:15.395456  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:15.395522  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:15.420901  530956 cri.go:89] found id: ""
	I1212 00:39:15.420922  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.420930  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:15.420935  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:15.420996  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:15.446341  530956 cri.go:89] found id: ""
	I1212 00:39:15.446355  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.446362  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:15.446367  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:15.446425  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:15.472134  530956 cri.go:89] found id: ""
	I1212 00:39:15.472148  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.472155  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:15.472160  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:15.472224  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:15.499707  530956 cri.go:89] found id: ""
	I1212 00:39:15.499721  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.499729  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:15.499734  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:15.499803  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:15.525097  530956 cri.go:89] found id: ""
	I1212 00:39:15.525111  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.525119  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:15.525126  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:15.525141  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:15.591570  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:15.591589  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:15.606307  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:15.606323  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:15.671615  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:15.663912   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.664737   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.666223   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.666722   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.668013   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:15.663912   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.664737   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.666223   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.666722   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.668013   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:15.671625  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:15.671640  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:15.740633  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:15.740680  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
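
Each cycle ends with the same four "Gathering logs" commands. The sketch below reproduces that fan-out, assuming local execution through `/bin/bash -c` so the shell fallbacks (`which crictl || echo crictl`, `|| sudo docker ps -a`) keep working as in the log; the `gather` helper is hypothetical, not minikube's logs.go API:

```go
// A sketch of the "Gathering logs" fan-out at the end of each cycle.
// Assumption: commands run locally; `gather` is a hypothetical helper.
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command through `/bin/bash -c`, as the
// log does, so pipes and `||` fallbacks keep their shell semantics.
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("== %s (err: %v) ==\n%s\n", name, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	// Same fallback chain as the log: prefer crictl, else docker.
	gather("container status",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
```
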
	I1212 00:39:18.284352  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:18.294497  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:18.294570  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:18.320150  530956 cri.go:89] found id: ""
	I1212 00:39:18.320164  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.320173  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:18.320178  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:18.320236  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:18.346472  530956 cri.go:89] found id: ""
	I1212 00:39:18.346486  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.346493  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:18.346498  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:18.346556  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:18.377328  530956 cri.go:89] found id: ""
	I1212 00:39:18.377342  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.377349  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:18.377354  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:18.377411  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:18.402792  530956 cri.go:89] found id: ""
	I1212 00:39:18.402813  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.402820  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:18.402826  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:18.402889  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:18.433183  530956 cri.go:89] found id: ""
	I1212 00:39:18.433198  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.433205  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:18.433210  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:18.433272  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:18.458993  530956 cri.go:89] found id: ""
	I1212 00:39:18.459007  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.459015  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:18.459020  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:18.459082  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:18.483237  530956 cri.go:89] found id: ""
	I1212 00:39:18.483251  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.483258  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:18.483267  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:18.483276  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:18.549785  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:18.549803  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:18.564675  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:18.564692  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:18.635252  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:18.622293   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.627996   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.628725   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.629829   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.630310   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:18.622293   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.627996   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.628725   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.629829   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.630310   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:18.635261  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:18.635271  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:18.704032  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:18.704054  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:21.245504  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:21.256336  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:21.256398  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:21.282849  530956 cri.go:89] found id: ""
	I1212 00:39:21.282863  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.282871  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:21.282878  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:21.282936  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:21.309330  530956 cri.go:89] found id: ""
	I1212 00:39:21.309344  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.309351  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:21.309359  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:21.309419  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:21.338973  530956 cri.go:89] found id: ""
	I1212 00:39:21.338986  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.338994  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:21.338999  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:21.339064  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:21.366261  530956 cri.go:89] found id: ""
	I1212 00:39:21.366275  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.366282  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:21.366287  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:21.366346  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:21.393801  530956 cri.go:89] found id: ""
	I1212 00:39:21.393815  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.393822  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:21.393827  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:21.393888  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:21.418339  530956 cri.go:89] found id: ""
	I1212 00:39:21.418353  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.418360  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:21.418365  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:21.418425  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:21.443336  530956 cri.go:89] found id: ""
	I1212 00:39:21.443350  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.443356  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:21.443364  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:21.443375  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:21.470973  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:21.470988  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:21.540182  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:21.540203  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:21.554835  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:21.554851  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:21.618440  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:21.609987   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.610798   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.612355   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.612867   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.614445   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:21.609987   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.610798   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.612355   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.612867   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.614445   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:21.618450  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:21.618460  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
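
The cycles repeat on a fixed cadence: `sudo pgrep -xnf kube-apiserver.*minikube.*` runs roughly every three seconds until a matching process appears. A sketch of such a wait loop follows; the interval and retry count are assumptions read off the log timestamps, not minikube's actual constants:

```go
// A sketch of the wait loop implied by the repeating cycles above.
// The 3-second interval and 10 retries are assumptions read off the
// log timestamps, not minikube's actual constants.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`;
// pgrep exits 0 only when a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	for i := 0; i < 10; i++ {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		fmt.Println("kube-apiserver not found; retrying")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}
```
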
	I1212 00:39:24.186363  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:24.196446  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:24.196514  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:24.228176  530956 cri.go:89] found id: ""
	I1212 00:39:24.228189  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.228196  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:24.228201  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:24.228263  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:24.252432  530956 cri.go:89] found id: ""
	I1212 00:39:24.252446  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.252453  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:24.252458  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:24.252517  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:24.277088  530956 cri.go:89] found id: ""
	I1212 00:39:24.277102  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.277109  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:24.277113  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:24.277172  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:24.301976  530956 cri.go:89] found id: ""
	I1212 00:39:24.301989  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.301996  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:24.302001  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:24.302058  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:24.326771  530956 cri.go:89] found id: ""
	I1212 00:39:24.326785  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.326792  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:24.326797  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:24.326858  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:24.352740  530956 cri.go:89] found id: ""
	I1212 00:39:24.352754  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.352761  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:24.352766  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:24.352825  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:24.379469  530956 cri.go:89] found id: ""
	I1212 00:39:24.379483  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.379490  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:24.379498  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:24.379508  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:24.407400  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:24.407417  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:24.473931  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:24.473951  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:24.488478  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:24.488494  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:24.552073  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:24.544053   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.544795   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.546281   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.546877   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.548351   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:24.544053   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.544795   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.546281   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.546877   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.548351   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:24.552083  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:24.552093  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:27.124323  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:27.134160  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:27.134218  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:27.161224  530956 cri.go:89] found id: ""
	I1212 00:39:27.161239  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.161247  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:27.161253  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:27.161317  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:27.185561  530956 cri.go:89] found id: ""
	I1212 00:39:27.185575  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.185582  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:27.185587  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:27.185647  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:27.212949  530956 cri.go:89] found id: ""
	I1212 00:39:27.212962  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.212969  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:27.212974  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:27.213035  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:27.237907  530956 cri.go:89] found id: ""
	I1212 00:39:27.237921  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.237928  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:27.237933  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:27.237991  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:27.264773  530956 cri.go:89] found id: ""
	I1212 00:39:27.264787  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.264794  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:27.264799  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:27.264858  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:27.290448  530956 cri.go:89] found id: ""
	I1212 00:39:27.290462  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.290469  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:27.290474  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:27.290531  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:27.315823  530956 cri.go:89] found id: ""
	I1212 00:39:27.315837  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.315844  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:27.315852  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:27.315863  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:27.389757  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:27.389777  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:27.422043  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:27.422059  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:27.492490  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:27.492509  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:27.507777  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:27.507793  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:27.571981  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:27.563800   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.564533   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.566031   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.566607   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.568082   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:27.563800   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.564533   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.566031   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.566607   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.568082   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
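
Every `describe nodes` attempt above fails the same way: kubectl cannot reach the apiserver on localhost:8441 and the dial is refused, which means nothing is listening on the port (a firewall drop would time out instead). A small sketch of that distinction, assuming the probe runs on the node itself:

```go
// A sketch of why every kubectl call above fails instantly:
// nothing listens on the apiserver port, so the TCP dial is
// refused rather than timing out. Assumes it runs on the node.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// e.g. "connect: connection refused", matching the log.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```
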
	I1212 00:39:30.074632  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:30.089373  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:30.089465  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:30.125906  530956 cri.go:89] found id: ""
	I1212 00:39:30.125923  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.125931  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:30.125939  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:30.126019  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:30.159780  530956 cri.go:89] found id: ""
	I1212 00:39:30.159796  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.159804  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:30.159810  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:30.159878  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:30.186451  530956 cri.go:89] found id: ""
	I1212 00:39:30.186466  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.186473  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:30.186478  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:30.186541  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:30.212831  530956 cri.go:89] found id: ""
	I1212 00:39:30.212846  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.212859  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:30.212864  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:30.212926  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:30.239897  530956 cri.go:89] found id: ""
	I1212 00:39:30.239912  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.239919  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:30.239924  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:30.239987  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:30.265595  530956 cri.go:89] found id: ""
	I1212 00:39:30.265610  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.265618  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:30.265623  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:30.265684  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:30.293057  530956 cri.go:89] found id: ""
	I1212 00:39:30.293072  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.293079  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:30.293087  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:30.293098  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:30.360384  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:30.360403  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:30.375514  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:30.375533  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:30.445622  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:30.436678   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.437400   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.439405   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.440010   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.441699   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:30.436678   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.437400   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.439405   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.440010   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.441699   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:30.445632  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:30.445642  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:30.514984  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:30.515002  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:33.046486  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:33.057328  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:33.057389  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:33.082571  530956 cri.go:89] found id: ""
	I1212 00:39:33.082584  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.082592  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:33.082597  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:33.082656  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:33.107156  530956 cri.go:89] found id: ""
	I1212 00:39:33.107169  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.107176  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:33.107181  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:33.107242  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:33.132433  530956 cri.go:89] found id: ""
	I1212 00:39:33.132448  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.132456  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:33.132460  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:33.132524  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:33.158141  530956 cri.go:89] found id: ""
	I1212 00:39:33.158155  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.158162  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:33.158167  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:33.158229  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:33.185335  530956 cri.go:89] found id: ""
	I1212 00:39:33.185350  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.185357  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:33.185362  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:33.185423  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:33.214702  530956 cri.go:89] found id: ""
	I1212 00:39:33.214716  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.214731  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:33.214738  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:33.214798  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:33.239415  530956 cri.go:89] found id: ""
	I1212 00:39:33.239429  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.239436  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:33.239444  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:33.239462  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:33.303881  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:33.303900  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:33.318306  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:33.318324  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:33.385940  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:33.376699   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.377337   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379127   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379736   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.381336   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:33.376699   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.377337   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379127   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379736   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.381336   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:33.385950  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:33.385961  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:33.453867  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:33.453884  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:35.983022  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:35.993721  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:35.993785  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:36.032639  530956 cri.go:89] found id: ""
	I1212 00:39:36.032654  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.032662  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:36.032667  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:36.032737  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:36.069795  530956 cri.go:89] found id: ""
	I1212 00:39:36.069810  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.069817  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:36.069822  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:36.069882  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:36.099096  530956 cri.go:89] found id: ""
	I1212 00:39:36.099111  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.099118  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:36.099124  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:36.099184  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:36.128685  530956 cri.go:89] found id: ""
	I1212 00:39:36.128699  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.128706  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:36.128711  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:36.128772  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:36.154641  530956 cri.go:89] found id: ""
	I1212 00:39:36.154654  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.154662  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:36.154666  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:36.154762  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:36.179316  530956 cri.go:89] found id: ""
	I1212 00:39:36.179330  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.179338  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:36.179343  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:36.179402  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:36.205036  530956 cri.go:89] found id: ""
	I1212 00:39:36.205050  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.205057  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:36.205066  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:36.205079  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:36.271067  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:36.271086  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:36.285990  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:36.286006  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:36.350986  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:36.343284   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.343819   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345314   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345743   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.347205   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:36.343284   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.343819   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345314   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345743   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.347205   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:36.350996  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:36.351005  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:36.418783  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:36.418803  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:38.948706  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:38.958630  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:38.958705  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:38.988268  530956 cri.go:89] found id: ""
	I1212 00:39:38.988282  530956 logs.go:282] 0 containers: []
	W1212 00:39:38.988289  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:38.988294  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:38.988372  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:39.017066  530956 cri.go:89] found id: ""
	I1212 00:39:39.017088  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.017095  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:39.017100  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:39.017158  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:39.044203  530956 cri.go:89] found id: ""
	I1212 00:39:39.044217  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.044223  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:39.044232  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:39.044293  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:39.073574  530956 cri.go:89] found id: ""
	I1212 00:39:39.073588  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.073595  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:39.073600  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:39.073658  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:39.098254  530956 cri.go:89] found id: ""
	I1212 00:39:39.098267  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.098274  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:39.098279  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:39.098338  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:39.122552  530956 cri.go:89] found id: ""
	I1212 00:39:39.122566  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.122573  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:39.122578  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:39.122641  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:39.149933  530956 cri.go:89] found id: ""
	I1212 00:39:39.149947  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.149954  530956 logs.go:284] No container was found matching "kindnet"
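Each polling round enumerates the control-plane containers one component at a time: crictl ps -a --quiet --name=<component> prints only container IDs, so an empty result (the found id: "" lines) means the component has no container in any state, running or exited. A sketch of that per-component enumeration, assuming sudo and crictl are available on PATH; the component list is copied from the trace, the loop itself is illustrative rather than minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names taken verbatim from the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// --quiet prints container IDs only; -a includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Corresponds to the W-level "No container was found matching" lines.
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%s: %v\n", name, ids)
		}
	}
}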
	I1212 00:39:39.149961  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:39.149972  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:39.164970  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:39.164986  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:39.228249  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:39.219299   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.219833   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.221740   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.222278   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.223944   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:39.219299   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.219833   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.221740   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.222278   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.223944   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:39.228259  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:39.228272  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:39.295712  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:39.295731  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:39.326861  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:39.326879  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
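Aside from the kubectl describe nodes call, each round shells out to four fixed gather commands: the kubelet and CRI-O journals (journalctl -u <unit> -n 400), a severity-filtered dmesg, and a container listing that falls back to docker ps when crictl is absent. A sketch of driving that command set, assuming a local bash; the command strings are copied from the log, while running them locally (instead of over SSH on the node, as ssh_runner does) is an illustrative simplification.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The four shell-based gather commands, verbatim from the trace above.
	gathers := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, g := range gathers {
		fmt.Println("Gathering logs for", g.name, "...")
		// CombinedOutput keeps stderr interleaved, matching what the log captures.
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", g.name, err)
		}
		fmt.Print(string(out))
	}
}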
	I1212 00:39:41.894749  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:41.904730  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:41.904791  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:41.929481  530956 cri.go:89] found id: ""
	I1212 00:39:41.929494  530956 logs.go:282] 0 containers: []
	W1212 00:39:41.929501  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:41.929506  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:41.929564  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:41.956371  530956 cri.go:89] found id: ""
	I1212 00:39:41.956385  530956 logs.go:282] 0 containers: []
	W1212 00:39:41.956392  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:41.956397  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:41.956453  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:41.998298  530956 cri.go:89] found id: ""
	I1212 00:39:41.998313  530956 logs.go:282] 0 containers: []
	W1212 00:39:41.998327  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:41.998332  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:41.998394  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:42.039528  530956 cri.go:89] found id: ""
	I1212 00:39:42.039542  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.039549  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:42.039554  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:42.039617  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:42.071895  530956 cri.go:89] found id: ""
	I1212 00:39:42.071909  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.071918  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:42.071923  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:42.071999  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:42.104807  530956 cri.go:89] found id: ""
	I1212 00:39:42.104823  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.104831  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:42.104837  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:42.104914  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:42.139871  530956 cri.go:89] found id: ""
	I1212 00:39:42.139886  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.139894  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:42.139903  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:42.139917  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:42.221872  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:42.211579   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.212551   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.214428   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.215573   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.216630   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:42.211579   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.212551   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.214428   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.215573   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.216630   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:42.221883  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:42.221894  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:42.294247  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:42.294267  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:42.327229  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:42.327245  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:42.396289  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:42.396308  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:44.911333  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:44.921559  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:44.921618  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:44.947811  530956 cri.go:89] found id: ""
	I1212 00:39:44.947825  530956 logs.go:282] 0 containers: []
	W1212 00:39:44.947832  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:44.947837  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:44.947898  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:44.974488  530956 cri.go:89] found id: ""
	I1212 00:39:44.974502  530956 logs.go:282] 0 containers: []
	W1212 00:39:44.974509  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:44.974514  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:44.974578  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:45.062335  530956 cri.go:89] found id: ""
	I1212 00:39:45.062350  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.062358  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:45.062363  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:45.062431  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:45.115594  530956 cri.go:89] found id: ""
	I1212 00:39:45.115611  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.115621  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:45.115627  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:45.115695  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:45.157432  530956 cri.go:89] found id: ""
	I1212 00:39:45.157449  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.157457  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:45.157463  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:45.157542  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:45.199222  530956 cri.go:89] found id: ""
	I1212 00:39:45.199237  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.199247  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:45.199252  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:45.199327  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:45.277211  530956 cri.go:89] found id: ""
	I1212 00:39:45.277239  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.277248  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:45.277256  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:45.277272  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:45.354665  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:45.354742  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:45.370015  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:45.370032  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:45.437294  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:45.428349   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.429025   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.430763   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.431362   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.433211   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:45.428349   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.429025   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.430763   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.431362   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.433211   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:45.437306  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:45.437317  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:45.506731  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:45.506752  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:48.035477  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:48.045681  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:48.045741  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:48.076045  530956 cri.go:89] found id: ""
	I1212 00:39:48.076059  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.076066  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:48.076072  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:48.076135  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:48.110061  530956 cri.go:89] found id: ""
	I1212 00:39:48.110074  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.110082  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:48.110087  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:48.110146  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:48.134924  530956 cri.go:89] found id: ""
	I1212 00:39:48.134939  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.134946  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:48.134951  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:48.135014  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:48.160105  530956 cri.go:89] found id: ""
	I1212 00:39:48.160119  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.160126  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:48.160131  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:48.160199  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:48.185148  530956 cri.go:89] found id: ""
	I1212 00:39:48.185162  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.185169  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:48.185174  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:48.185236  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:48.210105  530956 cri.go:89] found id: ""
	I1212 00:39:48.210119  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.210127  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:48.210132  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:48.210198  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:48.234723  530956 cri.go:89] found id: ""
	I1212 00:39:48.234736  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.234743  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:48.234752  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:48.234762  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:48.264606  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:48.264624  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:48.333093  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:48.333111  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:48.348065  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:48.348080  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:48.410868  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:48.402563   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.403327   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405002   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405468   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.407068   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:48.402563   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.403327   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405002   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405468   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.407068   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:48.410879  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:48.410891  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:50.982598  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:50.995299  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:50.995361  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:51.032326  530956 cri.go:89] found id: ""
	I1212 00:39:51.032340  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.032348  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:51.032353  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:51.032412  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:51.060416  530956 cri.go:89] found id: ""
	I1212 00:39:51.060435  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.060444  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:51.060448  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:51.060525  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:51.087755  530956 cri.go:89] found id: ""
	I1212 00:39:51.087769  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.087777  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:51.087783  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:51.087844  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:51.113932  530956 cri.go:89] found id: ""
	I1212 00:39:51.113946  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.113954  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:51.113959  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:51.114017  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:51.141585  530956 cri.go:89] found id: ""
	I1212 00:39:51.141599  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.141607  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:51.141612  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:51.141678  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:51.169491  530956 cri.go:89] found id: ""
	I1212 00:39:51.169506  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.169513  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:51.169518  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:51.169577  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:51.195655  530956 cri.go:89] found id: ""
	I1212 00:39:51.195668  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.195676  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:51.195684  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:51.195694  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:51.264764  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:51.264785  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:51.291612  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:51.291628  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:51.359746  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:51.359764  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:51.374319  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:51.374340  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:51.437078  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:51.428471   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.429034   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.430488   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.431070   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.432638   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:51.428471   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.429034   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.430488   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.431070   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.432638   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
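The timestamps show the whole cycle (the pgrep probe for the apiserver, the seven crictl queries, then the log gathers) repeating roughly every three seconds without the apiserver ever appearing. Below is a compact sketch of that wait loop; the pgrep pattern and the ~3s cadence are read off the trace, while the deadline value and the overall structure are illustrative assumptions, not minikube's actual retry logic.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
// probe from the log: pgrep exits non-zero when no process matches.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // illustrative budget only
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// In the trace, each miss triggers another round of log gathering;
		// here we simply wait out the ~3s cadence visible in the timestamps.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}

Because every iteration above ends with the same empty crictl results and the same connection-refused stderr, the loop never exits early; the remainder of this trace is that loop continuing to run.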
	I1212 00:39:53.938110  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:53.948663  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:53.948763  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:53.987477  530956 cri.go:89] found id: ""
	I1212 00:39:53.987490  530956 logs.go:282] 0 containers: []
	W1212 00:39:53.987497  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:53.987502  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:53.987565  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:54.026859  530956 cri.go:89] found id: ""
	I1212 00:39:54.026873  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.026881  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:54.026897  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:54.026958  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:54.054638  530956 cri.go:89] found id: ""
	I1212 00:39:54.054652  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.054659  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:54.054664  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:54.054820  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:54.080864  530956 cri.go:89] found id: ""
	I1212 00:39:54.080879  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.080886  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:54.080891  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:54.080958  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:54.106972  530956 cri.go:89] found id: ""
	I1212 00:39:54.106986  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.106993  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:54.106998  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:54.107056  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:54.131665  530956 cri.go:89] found id: ""
	I1212 00:39:54.131678  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.131686  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:54.131692  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:54.131749  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:54.155857  530956 cri.go:89] found id: ""
	I1212 00:39:54.155870  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.155877  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:54.155885  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:54.155895  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:54.225662  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:54.216735   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.217433   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219268   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219827   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.221703   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:54.216735   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.217433   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219268   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219827   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.221703   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:54.225675  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:54.225686  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:54.297964  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:54.297992  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:54.330016  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:54.330041  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:54.401820  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:54.401842  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:56.918391  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:56.929720  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:56.929780  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:56.955459  530956 cri.go:89] found id: ""
	I1212 00:39:56.955473  530956 logs.go:282] 0 containers: []
	W1212 00:39:56.955480  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:56.955485  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:56.955543  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:56.987918  530956 cri.go:89] found id: ""
	I1212 00:39:56.987932  530956 logs.go:282] 0 containers: []
	W1212 00:39:56.987939  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:56.987944  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:56.988002  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:57.020006  530956 cri.go:89] found id: ""
	I1212 00:39:57.020020  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.020033  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:57.020038  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:57.020115  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:57.048442  530956 cri.go:89] found id: ""
	I1212 00:39:57.048467  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.048475  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:57.048483  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:57.048552  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:57.074435  530956 cri.go:89] found id: ""
	I1212 00:39:57.074449  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.074456  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:57.074461  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:57.074521  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:57.099293  530956 cri.go:89] found id: ""
	I1212 00:39:57.099307  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.099315  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:57.099320  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:57.099379  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:57.125629  530956 cri.go:89] found id: ""
	I1212 00:39:57.125651  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.125659  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:57.125666  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:57.125676  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:57.155351  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:57.155367  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:57.220025  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:57.220044  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:57.234981  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:57.235003  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:57.300835  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:57.292962   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.293535   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295085   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295551   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.297012   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:57.292962   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.293535   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295085   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295551   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.297012   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:57.300845  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:57.300856  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:59.869530  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:59.882048  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:59.882110  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:59.907681  530956 cri.go:89] found id: ""
	I1212 00:39:59.907696  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.907703  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:59.907708  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:59.907775  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:59.932480  530956 cri.go:89] found id: ""
	I1212 00:39:59.932494  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.932509  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:59.932515  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:59.932583  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:59.958173  530956 cri.go:89] found id: ""
	I1212 00:39:59.958188  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.958195  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:59.958200  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:59.958261  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:59.990305  530956 cri.go:89] found id: ""
	I1212 00:39:59.990319  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.990326  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:59.990331  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:59.990390  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:40:00.115674  530956 cri.go:89] found id: ""
	I1212 00:40:00.115690  530956 logs.go:282] 0 containers: []
	W1212 00:40:00.115699  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:40:00.115705  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:40:00.115778  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:40:00.211546  530956 cri.go:89] found id: ""
	I1212 00:40:00.211573  530956 logs.go:282] 0 containers: []
	W1212 00:40:00.211583  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:40:00.211589  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:40:00.211670  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:40:00.306176  530956 cri.go:89] found id: ""
	I1212 00:40:00.306192  530956 logs.go:282] 0 containers: []
	W1212 00:40:00.306200  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:40:00.306208  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:40:00.306220  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:40:00.433331  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:40:00.433360  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:40:00.458175  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:40:00.458193  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:40:00.603203  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:40:00.592976   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.593818   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596110   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596864   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.598662   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:40:00.592976   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.593818   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596110   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596864   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.598662   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:40:00.603213  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:40:00.603224  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:40:00.674062  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:40:00.674086  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:40:03.207059  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:40:03.217144  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:40:03.217203  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:40:03.242377  530956 cri.go:89] found id: ""
	I1212 00:40:03.242391  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.242398  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:40:03.242403  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:40:03.242460  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:40:03.268604  530956 cri.go:89] found id: ""
	I1212 00:40:03.268618  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.268625  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:40:03.268630  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:40:03.268691  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:40:03.293354  530956 cri.go:89] found id: ""
	I1212 00:40:03.293367  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.293374  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:40:03.293379  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:40:03.293437  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:40:03.323082  530956 cri.go:89] found id: ""
	I1212 00:40:03.323095  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.323102  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:40:03.323108  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:40:03.323165  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:40:03.348118  530956 cri.go:89] found id: ""
	I1212 00:40:03.348132  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.348138  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:40:03.348144  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:40:03.348203  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:40:03.375333  530956 cri.go:89] found id: ""
	I1212 00:40:03.375346  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.375353  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:40:03.375358  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:40:03.375418  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:40:03.401835  530956 cri.go:89] found id: ""
	I1212 00:40:03.401850  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.401857  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:40:03.401864  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:40:03.401882  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:40:03.467887  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:40:03.459632   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.460370   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.461940   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.462285   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.463794   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:40:03.459632   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.460370   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.461940   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.462285   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.463794   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:40:03.467897  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:40:03.467907  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:40:03.536174  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:40:03.536194  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:40:03.564970  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:40:03.564985  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:40:03.632350  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:40:03.632369  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:40:06.147945  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:40:06.157971  530956 kubeadm.go:602] duration metric: took 4m2.720434125s to restartPrimaryControlPlane
	W1212 00:40:06.158027  530956 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 00:40:06.158103  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 00:40:06.569482  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:40:06.582591  530956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:40:06.590536  530956 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:40:06.590592  530956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:40:06.598618  530956 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:40:06.598629  530956 kubeadm.go:158] found existing configuration files:
	
	I1212 00:40:06.598698  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:40:06.606769  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:40:06.606840  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:40:06.614547  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:40:06.622660  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:40:06.622739  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:40:06.630003  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:40:06.638125  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:40:06.638179  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:40:06.645410  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:40:06.652882  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:40:06.652943  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
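
	The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint, and is otherwise removed so the following 'kubeadm init' regenerates it. A minimal sketch of the same check, assuming only the endpoint and file list shown in the log:

	endpoint="https://control-plane.minikube.internal:8441"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Remove any kubeconfig that is missing or points at a different endpoint,
	  # so the subsequent 'kubeadm init' writes a fresh one.
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
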
	I1212 00:40:06.660446  530956 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:40:06.700514  530956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:40:06.700561  530956 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:40:06.776561  530956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:40:06.776625  530956 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:40:06.776659  530956 kubeadm.go:319] OS: Linux
	I1212 00:40:06.776702  530956 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:40:06.776749  530956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:40:06.776795  530956 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:40:06.776842  530956 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:40:06.776889  530956 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:40:06.776936  530956 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:40:06.776980  530956 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:40:06.777026  530956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:40:06.777077  530956 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:40:06.848361  530956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:40:06.848476  530956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:40:06.848571  530956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:40:06.858454  530956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:40:06.861922  530956 out.go:252]   - Generating certificates and keys ...
	I1212 00:40:06.862039  530956 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:40:06.862113  530956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:40:06.862184  530956 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:40:06.862240  530956 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:40:06.862305  530956 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:40:06.862362  530956 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:40:06.862420  530956 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:40:06.862477  530956 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:40:06.862546  530956 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:40:06.862613  530956 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:40:06.862665  530956 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:40:06.862736  530956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:40:07.126544  530956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:40:07.166854  530956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:40:07.523509  530956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:40:07.692785  530956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:40:07.825726  530956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:40:07.826395  530956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:40:07.830778  530956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:40:07.833963  530956 out.go:252]   - Booting up control plane ...
	I1212 00:40:07.834090  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:40:07.834172  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:40:07.835198  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:40:07.850333  530956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:40:07.850580  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:40:07.857863  530956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:40:07.858096  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:40:07.858271  530956 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:40:07.986589  530956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:40:07.986752  530956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:44:07.988367  530956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001882345s
	I1212 00:44:07.988392  530956 kubeadm.go:319] 
	I1212 00:44:07.988471  530956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:44:07.988504  530956 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:44:07.988626  530956 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:44:07.988630  530956 kubeadm.go:319] 
	I1212 00:44:07.988743  530956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:44:07.988774  530956 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:44:07.988810  530956 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:44:07.988814  530956 kubeadm.go:319] 
	I1212 00:44:07.993727  530956 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:44:07.994213  530956 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:44:07.994355  530956 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:44:07.994630  530956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 00:44:07.994638  530956 kubeadm.go:319] 
	I1212 00:44:07.994738  530956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 00:44:07.994866  530956 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001882345s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
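
	The wait-control-plane failure above is kubeadm polling the kubelet's local health endpoint until the 4m0s deadline expires. The same checks can be run by hand using only the commands the error text itself names (a sketch; the expected "ok" reply is the one assumption added here):

	curl -sSL http://127.0.0.1:10248/healthz    # a healthy kubelet replies "ok"
	systemctl status kubelet                    # confirm the unit is active at all
	journalctl -xeu kubelet | tail -n 50        # most recent kubelet errors
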
	
	I1212 00:44:07.994955  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 00:44:08.418732  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:44:08.431583  530956 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:44:08.431639  530956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:44:08.439724  530956 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:44:08.439733  530956 kubeadm.go:158] found existing configuration files:
	
	I1212 00:44:08.439785  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:44:08.447652  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:44:08.447708  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:44:08.454853  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:44:08.462499  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:44:08.462562  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:44:08.470106  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:44:08.477811  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:44:08.477868  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:44:08.485348  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:44:08.493142  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:44:08.493207  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:44:08.501010  530956 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:44:08.619087  530956 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:44:08.619550  530956 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:44:08.685435  530956 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:48:10.247562  530956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 00:48:10.247592  530956 kubeadm.go:319] 
	I1212 00:48:10.247688  530956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 00:48:10.252292  530956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:48:10.252346  530956 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:48:10.252445  530956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:48:10.252500  530956 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:48:10.252533  530956 kubeadm.go:319] OS: Linux
	I1212 00:48:10.252577  530956 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:48:10.252624  530956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:48:10.252670  530956 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:48:10.252716  530956 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:48:10.252768  530956 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:48:10.252816  530956 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:48:10.252859  530956 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:48:10.252906  530956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:48:10.252951  530956 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:48:10.253023  530956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:48:10.253117  530956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:48:10.253205  530956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:48:10.253277  530956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:48:10.256411  530956 out.go:252]   - Generating certificates and keys ...
	I1212 00:48:10.256515  530956 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:48:10.256580  530956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:48:10.256656  530956 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:48:10.256724  530956 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:48:10.256818  530956 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:48:10.256878  530956 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:48:10.256941  530956 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:48:10.257008  530956 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:48:10.257086  530956 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:48:10.257157  530956 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:48:10.257195  530956 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:48:10.257249  530956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:48:10.257299  530956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:48:10.257355  530956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:48:10.257407  530956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:48:10.257469  530956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:48:10.257524  530956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:48:10.257609  530956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:48:10.257674  530956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:48:10.260574  530956 out.go:252]   - Booting up control plane ...
	I1212 00:48:10.260690  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:48:10.260801  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:48:10.260876  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:48:10.260981  530956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:48:10.261102  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:48:10.261235  530956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:48:10.261332  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:48:10.261377  530956 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:48:10.261506  530956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:48:10.261614  530956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:48:10.261707  530956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000091689s
	I1212 00:48:10.261721  530956 kubeadm.go:319] 
	I1212 00:48:10.261778  530956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:48:10.261809  530956 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:48:10.261921  530956 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:48:10.261925  530956 kubeadm.go:319] 
	I1212 00:48:10.262045  530956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:48:10.262083  530956 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:48:10.262112  530956 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:48:10.262133  530956 kubeadm.go:319] 
	I1212 00:48:10.262182  530956 kubeadm.go:403] duration metric: took 12m6.858628348s to StartCluster
	I1212 00:48:10.262232  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:48:10.262300  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:48:10.289138  530956 cri.go:89] found id: ""
	I1212 00:48:10.289156  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.289163  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:48:10.289168  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:48:10.289230  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:48:10.317667  530956 cri.go:89] found id: ""
	I1212 00:48:10.317681  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.317689  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:48:10.317694  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:48:10.317758  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:48:10.347070  530956 cri.go:89] found id: ""
	I1212 00:48:10.347083  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.347091  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:48:10.347096  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:48:10.347155  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:48:10.373637  530956 cri.go:89] found id: ""
	I1212 00:48:10.373650  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.373658  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:48:10.373663  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:48:10.373722  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:48:10.401060  530956 cri.go:89] found id: ""
	I1212 00:48:10.401074  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.401081  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:48:10.401086  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:48:10.401146  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:48:10.426271  530956 cri.go:89] found id: ""
	I1212 00:48:10.426296  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.426303  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:48:10.426309  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:48:10.426375  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:48:10.451340  530956 cri.go:89] found id: ""
	I1212 00:48:10.451354  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.451361  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:48:10.451369  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:48:10.451379  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:48:10.526222  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:48:10.526241  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:48:10.557574  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:48:10.557591  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:48:10.627641  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:48:10.627659  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:48:10.642797  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:48:10.642812  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:48:10.704719  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:48:10.695841   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.696445   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698250   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698875   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.700463   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:48:10.695841   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.696445   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698250   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698875   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.700463   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 00:48:10.704732  530956 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000091689s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 00:48:10.704782  530956 out.go:285] * 
	W1212 00:48:10.704838  530956 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000091689s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:48:10.704854  530956 out.go:285] * 
	W1212 00:48:10.706992  530956 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:48:10.711878  530956 out.go:203] 
	W1212 00:48:10.714724  530956 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000091689s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:48:10.714773  530956 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 00:48:10.714793  530956 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 00:48:10.717973  530956 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167671353Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167708004Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167752311Z" level=info msg="Create NRI interface"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167872177Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167882466Z" level=info msg="runtime interface created"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167904357Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167910363Z" level=info msg="runtime interface starting up..."
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167919183Z" level=info msg="starting plugins..."
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167936889Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.168004375Z" level=info msg="No systemd watchdog enabled"
	Dec 12 00:36:02 functional-035643 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.854632207Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=640da022-2edf-494c-a660-79e3ab919eba name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.855342483Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=673ecd0d-a1ac-45d5-bb90-3e1f04cdc90f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.855810714Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c57f39b7-fb58-4f67-bde4-1b55c2187b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.856291532Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a97cf7ab-fcf0-4971-8a79-d2c53b6e4ee5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.856721905Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=01bb9bbf-51cf-478f-81f3-99ec7edffcf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.857120764Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=272d9706-6818-4f2e-bd33-95134bf8fb23 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.857524931Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=50b82acc-740c-444d-8ec5-a3c84ad4b6d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.688638675Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=4b143437-6a5c-4f02-b714-2d1bb8cb5a7a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689301235Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=54834de4-a19b-47fe-b478-123a3a9a03c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689852011Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a783bb1e-a84b-4d8e-b3d6-349f1b7407cf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.690318153Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d1830fa6-0c29-40cb-a67f-5512d68b4fbf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691052318Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=28af2ec8-e6ac-48d5-8255-6af4687f21e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691575Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=e0b0d4f6-b48c-4765-bc0e-dcfd4d36d892 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.69204275Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5eedd5b1-6dbb-4fbd-8ae6-426f470f128b name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:50:34.041093   23372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:34.041677   23372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:34.043160   23372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:34.043607   23372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:34.045036   23372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 00:17] overlayfs: idmapped layers are currently not supported
	[Dec12 00:18] overlayfs: idmapped layers are currently not supported
	[Dec12 00:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:50:34 up  3:32,  0 user,  load average: 1.26, 0.49, 0.52
	Linux functional-035643 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:50:31 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:50:32 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1150.
	Dec 12 00:50:32 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:32 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:32 functional-035643 kubelet[23248]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:32 functional-035643 kubelet[23248]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:32 functional-035643 kubelet[23248]: E1212 00:50:32.504257   23248 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:50:32 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:50:32 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:50:33 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1151.
	Dec 12 00:50:33 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:33 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:33 functional-035643 kubelet[23284]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:33 functional-035643 kubelet[23284]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:33 functional-035643 kubelet[23284]: E1212 00:50:33.171577   23284 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:50:33 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:50:33 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:50:33 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1152.
	Dec 12 00:50:33 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:33 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:34 functional-035643 kubelet[23365]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:34 functional-035643 kubelet[23365]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:34 functional-035643 kubelet[23365]: E1212 00:50:34.036242   23365 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:50:34 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:50:34 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
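The kubelet section above pins down the failure behind this whole group of tests: on this cgroup v1 host, kubelet v1.35.0-beta.0 fails its own config validation ("kubelet is configured to not run on a host using cgroup v1"), systemd restart-loops it (restart counter 1150 and climbing), and the static apiserver pod on port 8441 is never launched. The suggestion minikube emits (--extra-config=kubelet.cgroup-driver=systemd) targets the cgroup driver rather than the cgroup version, so it likely cannot clear this particular error. A minimal diagnostic-and-workaround sketch, run inside the node over minikube ssh; the lowerCamel YAML spelling failCgroupV1 is inferred from the kubeadm warning above and is an assumption, as is whether minikube leaves a hand edit of /var/lib/kubelet/config.yaml in place across restarts:

	# confirm the restart loop and the exact validation failure
	systemctl status kubelet --no-pager
	journalctl -xeu kubelet | grep -m1 'cgroup v1'

	# check which cgroup version the host actually mounts
	stat -fc %T /sys/fs/cgroup    # cgroup2fs => v2, tmpfs => v1

	# hypothetical workaround per the warning above: opt kubelet back in to cgroup v1
	grep -q '^failCgroupV1:' /var/lib/kubelet/config.yaml || \
	  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet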
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 2 (380.895726ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.21s)
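The status probe above is a plain Go template over minikube's status struct; querying one field at a time is how the harness separates "machine down" from "control plane down". The same checks can be run by hand (a sketch, using the profile name from this run):

	# each --format template selects one field of the status struct
	out/minikube-linux-arm64 status -p functional-035643 --format='{{.Host}}'        # Running
	out/minikube-linux-arm64 status -p functional-035643 --format='{{.APIServer}}'   # Stopped
	# a degraded component yields a non-zero exit code, which the harness
	# records as "status error: exit status 2 (may be ok)" instead of failing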

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-035643 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-035643 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (54.622406ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-035643 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-035643 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-035643 describe po hello-node-connect: exit status 1 (59.783705ms)

                                                
                                                
** stderr ** 
	E1212 00:50:19.899549  545265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:19.901125  545265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:19.902554  545265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:19.903980  545265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:19.905364  545265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-035643 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-035643 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-035643 logs -l app=hello-node-connect: exit status 1 (58.063994ms)

                                                
                                                
** stderr ** 
	E1212 00:50:19.958875  545269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:19.960383  545269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:19.961828  545269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:19.963250  545269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-035643 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-035643 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-035643 describe svc hello-node-connect: exit status 1 (62.749897ms)

                                                
                                                
** stderr ** 
	E1212 00:50:20.019960  545273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:20.021685  545273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:20.023400  545273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:20.025105  545273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:20.026706  545273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-035643 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
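All four kubectl calls in this post-mortem die the same way: client-side API discovery cannot reach 192.168.49.2:8441, so each command prints five identical memcache.go errors before giving up. When triaging by hand it is cheaper to probe the endpoint once; a sketch, assuming the apiserver's standard /livez endpoint and the address and port from this profile:

	# fail fast when the control plane is down instead of re-running kubectl
	if curl -skm2 https://192.168.49.2:8441/livez >/dev/null; then
	  kubectl --context functional-035643 describe svc hello-node-connect
	else
	  echo "apiserver on 192.168.49.2:8441 is not accepting connections" >&2
	fi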
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-035643
helpers_test.go:244: (dbg) docker inspect functional-035643:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	        "Created": "2025-12-12T00:21:16.539894649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:21:16.600605162Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hostname",
	        "HostsPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hosts",
	        "LogPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a-json.log",
	        "Name": "/functional-035643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-035643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	                "LowerDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-035643",
	                "Source": "/var/lib/docker/volumes/functional-035643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035643",
	                "name.minikube.sigs.k8s.io": "functional-035643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ede6a17442d6bf83b8f4c9f93f252345cec3d0406f82de2d6bd2cfd4713e2163",
	            "SandboxKey": "/var/run/docker/netns/ede6a17442d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d5:12:89:ea:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad01995b183fdebead6c725e2b942ae8ce2d3964b3552789fe5b50ee7e7239a3",
	                    "EndpointID": "d429a1cd0f840d042af4ad7ea0bda6067a342be7fb552083411004a3604b0124",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035643",
	                        "02b8c8e636a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
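The inspect output shows how the kic driver wires up the guest: every guest port (22, 2376, 5000, 8441, 32443) is published on a loopback-only ephemeral host port, with the apiserver's 8441/tcp mapped to 127.0.0.1:33186. The mapping can be read back with the same Go-template pattern the provisioner itself runs later in this log:

	# resolve the host-side apiserver port the way minikube's cli_runner does
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-035643
	# equivalently
	docker port functional-035643 8441/tcp    # -> 127.0.0.1:33186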
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643: exit status 2 (312.906003ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
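Taken together, {{.Host}}=Running and {{.APIServer}}=Stopped localize the fault: the kic container and its port forwards are healthy, so the refused connections on 8441 come from the crash-looping kubelet never launching the apiserver, not from networking. A one-liner to assert that combination before reading full logs (a sketch built from the probes above):

	# container up but control plane down => go straight to the kubelet logs
	host=$(out/minikube-linux-arm64 status -p functional-035643 --format='{{.Host}}')
	api=$(out/minikube-linux-arm64 status -p functional-035643 --format='{{.APIServer}}')
	[ "$host" = Running ] && [ "$api" = Stopped ] && \
	  echo "kic container is up; inspect kubelet via: out/minikube-linux-arm64 -p functional-035643 ssh -- journalctl -u kubelet"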
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-035643 logs -n 25: (1.010073054s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-035643 cache reload                                                                                                                               │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ kubectl │ functional-035643 kubectl -- --context functional-035643 get pods                                                                                            │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ start   │ -p functional-035643 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ config  │ functional-035643 config unset cpus                                                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │ 12 Dec 25 00:48 UTC │
	│ cp      │ functional-035643 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │ 12 Dec 25 00:48 UTC │
	│ config  │ functional-035643 config get cpus                                                                                                                            │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │                     │
	│ config  │ functional-035643 config set cpus 2                                                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │ 12 Dec 25 00:48 UTC │
	│ config  │ functional-035643 config get cpus                                                                                                                            │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │ 12 Dec 25 00:48 UTC │
	│ config  │ functional-035643 config unset cpus                                                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │ 12 Dec 25 00:48 UTC │
	│ ssh     │ functional-035643 ssh -n functional-035643 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │ 12 Dec 25 00:48 UTC │
	│ config  │ functional-035643 config get cpus                                                                                                                            │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │                     │
	│ ssh     │ functional-035643 ssh echo hello                                                                                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │ 12 Dec 25 00:48 UTC │
	│ cp      │ functional-035643 cp functional-035643:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1376821364/001/cp-test.txt │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │ 12 Dec 25 00:48 UTC │
	│ ssh     │ functional-035643 ssh cat /etc/hostname                                                                                                                      │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │ 12 Dec 25 00:48 UTC │
	│ ssh     │ functional-035643 ssh -n functional-035643 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │ 12 Dec 25 00:48 UTC │
	│ tunnel  │ functional-035643 tunnel --alsologtostderr                                                                                                                   │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │                     │
	│ tunnel  │ functional-035643 tunnel --alsologtostderr                                                                                                                   │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │                     │
	│ cp      │ functional-035643 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │ 12 Dec 25 00:48 UTC │
	│ tunnel  │ functional-035643 tunnel --alsologtostderr                                                                                                                   │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │                     │
	│ ssh     │ functional-035643 ssh -n functional-035643 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:48 UTC │ 12 Dec 25 00:48 UTC │
	│ addons  │ functional-035643 addons list                                                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ addons  │ functional-035643 addons list -o json                                                                                                                        │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:58.676999  530956 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:58.677109  530956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:58.677113  530956 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:58.677117  530956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:58.677347  530956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:35:58.677686  530956 out.go:368] Setting JSON to false
	I1212 00:35:58.678525  530956 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11904,"bootTime":1765487855,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:35:58.678585  530956 start.go:143] virtualization:  
	I1212 00:35:58.682116  530956 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:35:58.686138  530956 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:58.686257  530956 notify.go:221] Checking for updates...
	I1212 00:35:58.691862  530956 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:58.694918  530956 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:35:58.697806  530956 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:35:58.700662  530956 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:35:58.703472  530956 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:58.706890  530956 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:58.706982  530956 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:58.735768  530956 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:35:58.735882  530956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:58.786774  530956 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:35:58.777518712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:58.786886  530956 docker.go:319] overlay module found
	I1212 00:35:58.790016  530956 out.go:179] * Using the docker driver based on existing profile
	I1212 00:35:58.792828  530956 start.go:309] selected driver: docker
	I1212 00:35:58.792840  530956 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:58.792956  530956 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:58.793078  530956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:58.848144  530956 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:35:58.839160729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:58.848551  530956 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:35:58.848575  530956 cni.go:84] Creating CNI manager for ""
	I1212 00:35:58.848625  530956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:35:58.848666  530956 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:58.851767  530956 out.go:179] * Starting "functional-035643" primary control-plane node in "functional-035643" cluster
	I1212 00:35:58.854549  530956 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:35:58.857426  530956 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:35:58.860284  530956 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:35:58.860323  530956 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:35:58.860332  530956 cache.go:65] Caching tarball of preloaded images
	I1212 00:35:58.860357  530956 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:35:58.860418  530956 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 00:35:58.860426  530956 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 00:35:58.860536  530956 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/config.json ...
	I1212 00:35:58.879785  530956 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:35:58.879795  530956 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:35:58.879813  530956 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:35:58.879843  530956 start.go:360] acquireMachinesLock for functional-035643: {Name:mkb0cdc7d354412594dc63c0234fde00134e758d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:35:58.879904  530956 start.go:364] duration metric: took 45.603µs to acquireMachinesLock for "functional-035643"
	I1212 00:35:58.879924  530956 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:35:58.879928  530956 fix.go:54] fixHost starting: 
	I1212 00:35:58.880192  530956 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
	I1212 00:35:58.897119  530956 fix.go:112] recreateIfNeeded on functional-035643: state=Running err=<nil>
	W1212 00:35:58.897146  530956 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:35:58.900349  530956 out.go:252] * Updating the running docker "functional-035643" container ...
	I1212 00:35:58.900378  530956 machine.go:94] provisionDockerMachine start ...
	I1212 00:35:58.900465  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:58.917663  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:58.917980  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:58.917985  530956 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:35:59.082110  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:35:59.082124  530956 ubuntu.go:182] provisioning hostname "functional-035643"
	I1212 00:35:59.082187  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.099710  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:59.100009  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:59.100017  530956 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-035643 && echo "functional-035643" | sudo tee /etc/hostname
	I1212 00:35:59.259555  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-035643
	
	I1212 00:35:59.259640  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.277248  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:59.277556  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:59.277570  530956 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-035643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-035643/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-035643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:35:59.427001  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:35:59.427018  530956 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 00:35:59.427041  530956 ubuntu.go:190] setting up certificates
	I1212 00:35:59.427057  530956 provision.go:84] configureAuth start
	I1212 00:35:59.427116  530956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:35:59.444510  530956 provision.go:143] copyHostCerts
	I1212 00:35:59.444577  530956 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 00:35:59.444584  530956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 00:35:59.444656  530956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 00:35:59.444762  530956 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 00:35:59.444766  530956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 00:35:59.444790  530956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 00:35:59.444853  530956 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 00:35:59.444856  530956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 00:35:59.444879  530956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 00:35:59.444932  530956 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.functional-035643 san=[127.0.0.1 192.168.49.2 functional-035643 localhost minikube]
	I1212 00:35:59.773887  530956 provision.go:177] copyRemoteCerts
	I1212 00:35:59.773940  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:35:59.773979  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.792006  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:35:59.898459  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:35:59.916125  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:35:59.934437  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:35:59.951804  530956 provision.go:87] duration metric: took 524.726096ms to configureAuth
	I1212 00:35:59.951820  530956 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:35:59.952018  530956 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:59.952114  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:35:59.968939  530956 main.go:143] libmachine: Using SSH client type: native
	I1212 00:35:59.969228  530956 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1212 00:35:59.969239  530956 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:36:00.563754  530956 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:36:00.563766  530956 machine.go:97] duration metric: took 1.663381425s to provisionDockerMachine
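For reference, the tee command a few lines up writes a one-line environment file that cri-o picks up when it restarts; going by the command and the echoed output captured in this log, /etc/sysconfig/crio.minikube should contain exactly:

    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '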
	I1212 00:36:00.563776  530956 start.go:293] postStartSetup for "functional-035643" (driver="docker")
	I1212 00:36:00.563787  530956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:36:00.563864  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:36:00.563909  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.587628  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:00.694584  530956 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:36:00.698084  530956 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:36:00.698101  530956 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:36:00.698111  530956 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 00:36:00.698167  530956 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 00:36:00.698253  530956 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 00:36:00.698337  530956 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts -> hosts in /etc/test/nested/copy/490954
	I1212 00:36:00.698388  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/490954
	I1212 00:36:00.706001  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:36:00.723687  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts --> /etc/test/nested/copy/490954/hosts (40 bytes)
	I1212 00:36:00.741785  530956 start.go:296] duration metric: took 177.995516ms for postStartSetup
	I1212 00:36:00.741883  530956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:36:00.741922  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.760230  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:00.864012  530956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:36:00.868713  530956 fix.go:56] duration metric: took 1.988777195s for fixHost
	I1212 00:36:00.868727  530956 start.go:83] releasing machines lock for "functional-035643", held for 1.988815594s
	I1212 00:36:00.868792  530956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-035643
	I1212 00:36:00.885011  530956 ssh_runner.go:195] Run: cat /version.json
	I1212 00:36:00.885055  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.885313  530956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:36:00.885366  530956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
	I1212 00:36:00.906879  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:00.908992  530956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
	I1212 00:36:01.113200  530956 ssh_runner.go:195] Run: systemctl --version
	I1212 00:36:01.120029  530956 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:36:01.159180  530956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:36:01.163912  530956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:36:01.163983  530956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:36:01.172622  530956 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:36:01.172636  530956 start.go:496] detecting cgroup driver to use...
	I1212 00:36:01.172680  530956 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:36:01.172728  530956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:36:01.189532  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:36:01.203890  530956 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:36:01.203963  530956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:36:01.220816  530956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:36:01.234536  530956 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:36:01.370158  530956 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:36:01.488527  530956 docker.go:234] disabling docker service ...
	I1212 00:36:01.488594  530956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:36:01.503932  530956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:36:01.516796  530956 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:36:01.637401  530956 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:36:01.761796  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:36:01.774534  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:36:01.788471  530956 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 00:36:01.788535  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.797095  530956 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:36:01.797168  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.806445  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.815271  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.824092  530956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:36:01.832291  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.841209  530956 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.851179  530956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:36:01.859893  530956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:36:01.867359  530956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:36:01.874599  530956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:01.993195  530956 ssh_runner.go:195] Run: sudo systemctl restart crio
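Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands in this log, with the TOML section headers assumed from cri-o's stock drop-in layout, not a capture of the real file:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]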
	I1212 00:36:02.173735  530956 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:36:02.173807  530956 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:36:02.177649  530956 start.go:564] Will wait 60s for crictl version
	I1212 00:36:02.177702  530956 ssh_runner.go:195] Run: which crictl
	I1212 00:36:02.181255  530956 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:36:02.206520  530956 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 00:36:02.206592  530956 ssh_runner.go:195] Run: crio --version
	I1212 00:36:02.236053  530956 ssh_runner.go:195] Run: crio --version
	I1212 00:36:02.270501  530956 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 00:36:02.273364  530956 cli_runner.go:164] Run: docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:36:02.289602  530956 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:36:02.296412  530956 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 00:36:02.299311  530956 kubeadm.go:884] updating cluster {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:36:02.299467  530956 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:36:02.299536  530956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:36:02.337479  530956 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:36:02.337493  530956 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:36:02.337550  530956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:36:02.363122  530956 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:36:02.363134  530956 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:36:02.363141  530956 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 00:36:02.363237  530956 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-035643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
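The ExecStart override above is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below); to confirm what the node actually runs, the merged unit can be inspected on the machine with:

    # print the kubelet unit plus every drop-in, as systemd resolves them
    systemctl cat kubelet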
	I1212 00:36:02.363318  530956 ssh_runner.go:195] Run: crio config
	I1212 00:36:02.413513  530956 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 00:36:02.413532  530956 cni.go:84] Creating CNI manager for ""
	I1212 00:36:02.413540  530956 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:36:02.413548  530956 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:36:02.413569  530956 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-035643 NodeName:functional-035643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:36:02.413686  530956 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-035643"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:36:02.413753  530956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:36:02.421266  530956 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:36:02.421324  530956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:36:02.428464  530956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 00:36:02.441052  530956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:36:02.453157  530956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
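With kubeadm.yaml.new staged on the node, one way to sanity-check a multi-document kubeadm file like the one rendered above before the init phases run is kubeadm's own validator. A hedged example follows; the validate subcommand exists in recent kubeadm releases, but nothing in this log shows the test invoking it:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new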
	I1212 00:36:02.466066  530956 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:36:02.472532  530956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:02.578480  530956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:36:02.719058  530956 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643 for IP: 192.168.49.2
	I1212 00:36:02.719069  530956 certs.go:195] generating shared ca certs ...
	I1212 00:36:02.719086  530956 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:36:02.719283  530956 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 00:36:02.719337  530956 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 00:36:02.719344  530956 certs.go:257] generating profile certs ...
	I1212 00:36:02.719449  530956 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.key
	I1212 00:36:02.719541  530956 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key.8a9a2493
	I1212 00:36:02.719585  530956 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key
	I1212 00:36:02.719735  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 00:36:02.719767  530956 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 00:36:02.719779  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:36:02.719809  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:36:02.719833  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:36:02.719859  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 00:36:02.719902  530956 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 00:36:02.720656  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:36:02.742914  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:36:02.761747  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:36:02.779250  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:36:02.796535  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:36:02.813979  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:36:02.832344  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:36:02.850165  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:36:02.867847  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 00:36:02.887774  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:36:02.905148  530956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 00:36:02.923137  530956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:36:02.936200  530956 ssh_runner.go:195] Run: openssl version
	I1212 00:36:02.943771  530956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 00:36:02.951677  530956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 00:36:02.959104  530956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 00:36:02.962881  530956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 00:36:02.962937  530956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 00:36:03.006038  530956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:36:03.014202  530956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.022168  530956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 00:36:03.030174  530956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.033892  530956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.033949  530956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 00:36:03.075143  530956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:36:03.082587  530956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.089740  530956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:36:03.097209  530956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.100982  530956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.101039  530956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:03.141961  530956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:36:03.149082  530956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:36:03.152710  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:36:03.193308  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:36:03.236349  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:36:03.279368  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:36:03.320758  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:36:03.362313  530956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
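Each -checkend probe above asks whether the certificate expires within the next 86400 seconds (24 hours): openssl prints "Certificate will not expire" and exits 0 when more than a day of validity remains, otherwise it prints "Certificate will expire" and exits 1. For example:

    # exit status 0 here is what lets the restart path reuse the existing certs
    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400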
	I1212 00:36:03.403564  530956 kubeadm.go:401] StartCluster: {Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:36:03.403639  530956 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:36:03.403697  530956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:36:03.429883  530956 cri.go:89] found id: ""
	I1212 00:36:03.429959  530956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:36:03.437518  530956 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:36:03.437528  530956 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:36:03.437580  530956 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:36:03.444705  530956 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.445211  530956 kubeconfig.go:125] found "functional-035643" server: "https://192.168.49.2:8441"
	I1212 00:36:03.446485  530956 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:36:03.453928  530956 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 00:21:24.717912452 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 00:36:02.461560447 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
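The drift check itself is nothing more than a unified diff of the previous kubeadm.yaml against the freshly rendered one; diff exits non-zero when the files differ, and that exit status is what sends minikube down the reconfigure path. A minimal sketch of the same check, using the paths from this log:

    # exit status 1 => config drift, reconfigure; 0 => nothing to do
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new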
	I1212 00:36:03.453947  530956 kubeadm.go:1161] stopping kube-system containers ...
	I1212 00:36:03.453959  530956 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 00:36:03.454013  530956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:36:03.481725  530956 cri.go:89] found id: ""
	I1212 00:36:03.481784  530956 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:36:03.499216  530956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:36:03.507872  530956 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 00:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 12 00:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 12 00:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 00:25 /etc/kubernetes/scheduler.conf
	
	I1212 00:36:03.507966  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:36:03.516663  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:36:03.524482  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.524541  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:36:03.532121  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:36:03.539690  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.539749  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:36:03.547386  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:36:03.555458  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:03.555515  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:36:03.563050  530956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:36:03.570932  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:03.615951  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.017170  530956 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.401194576s)
	I1212 00:36:05.017241  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.218047  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.283161  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:05.326722  530956 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:36:05.326794  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:05.827661  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:06.327088  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 115 further near-identical `sudo pgrep -xnf kube-apiserver.*minikube.*` polls, repeated every ~500ms from 00:36:06.826 through 00:37:03.827, elided; the apiserver process never appeared ...]
	I1212 00:37:04.327657  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:04.827730  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:05.326940  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:05.327024  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:05.355488  530956 cri.go:89] found id: ""
	I1212 00:37:05.355502  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.355509  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:05.355514  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:05.355580  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:05.379984  530956 cri.go:89] found id: ""
	I1212 00:37:05.379998  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.380005  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:05.380010  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:05.380068  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:05.404986  530956 cri.go:89] found id: ""
	I1212 00:37:05.405001  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.405010  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:05.405015  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:05.405072  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:05.429349  530956 cri.go:89] found id: ""
	I1212 00:37:05.429363  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.429370  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:05.429375  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:05.429438  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:05.453950  530956 cri.go:89] found id: ""
	I1212 00:37:05.453963  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.453970  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:05.453975  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:05.454030  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:05.481105  530956 cri.go:89] found id: ""
	I1212 00:37:05.481118  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.481126  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:05.481131  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:05.481188  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:05.506041  530956 cri.go:89] found id: ""
	I1212 00:37:05.506054  530956 logs.go:282] 0 containers: []
	W1212 00:37:05.506062  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:05.506069  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:05.506079  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:05.575208  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:05.575226  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:05.602842  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:05.602858  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:05.674408  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:05.674425  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:05.688466  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:05.688482  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:05.756639  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:05.748526   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.749193   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.750701   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.751299   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:05.752883   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
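Every `describe nodes` attempt in this section fails the same way: nothing is listening on localhost:8441, so kubectl's API discovery is refused before the real request is ever sent. A minimal way to confirm that by hand from inside the node (a sketch, assuming `minikube ssh` access; the port comes from the kubeconfig used above):

    # Is anything bound to the apiserver port, and does /healthz answer?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    curl -ksS https://localhost:8441/healthz; echo
    # Has CRI-O ever created an apiserver container? Empty output = never started.
    sudo crictl ps -a --name=kube-apiserver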
	I1212 00:37:08.256849  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:08.268489  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:08.268547  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:08.294558  530956 cri.go:89] found id: ""
	I1212 00:37:08.294571  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.294578  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:08.294583  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:08.294647  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:08.324264  530956 cri.go:89] found id: ""
	I1212 00:37:08.324277  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.324284  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:08.324289  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:08.324345  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:08.349672  530956 cri.go:89] found id: ""
	I1212 00:37:08.349685  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.349692  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:08.349697  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:08.349755  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:08.375495  530956 cri.go:89] found id: ""
	I1212 00:37:08.375509  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.375516  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:08.375521  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:08.375579  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:08.405282  530956 cri.go:89] found id: ""
	I1212 00:37:08.405305  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.405312  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:08.405317  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:08.405384  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:08.431165  530956 cri.go:89] found id: ""
	I1212 00:37:08.431178  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.431185  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:08.431190  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:08.431255  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:08.456458  530956 cri.go:89] found id: ""
	I1212 00:37:08.456472  530956 logs.go:282] 0 containers: []
	W1212 00:37:08.456479  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:08.456487  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:08.456498  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:08.470633  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:08.470647  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:08.537226  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:08.528672   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.529056   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.530703   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.531301   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:08.532944   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:08.537245  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:08.537256  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:08.606512  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:08.606534  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:08.634126  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:08.634142  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
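The timestamps show the harness re-polling on a roughly three-second cadence. The same wait can be expressed as a small shell loop (a sketch; the crictl invocation is copied from the log, the 40-iteration cap is illustrative):

    # Wait up to ~2 minutes for a kube-apiserver container to appear in CRI-O.
    for i in $(seq 1 40); do
      id=$(sudo crictl ps -a --quiet --name=kube-apiserver)
      [ -n "$id" ] && { echo "found: $id"; break; }
      sleep 3
    done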
	I1212 00:37:11.201712  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:11.211510  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:11.211571  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:11.249104  530956 cri.go:89] found id: ""
	I1212 00:37:11.249118  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.249135  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:11.249141  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:11.249214  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:11.285113  530956 cri.go:89] found id: ""
	I1212 00:37:11.285132  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.285143  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:11.285148  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:11.285218  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:11.315788  530956 cri.go:89] found id: ""
	I1212 00:37:11.315802  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.315809  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:11.315814  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:11.315875  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:11.346544  530956 cri.go:89] found id: ""
	I1212 00:37:11.346558  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.346565  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:11.346571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:11.346629  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:11.376168  530956 cri.go:89] found id: ""
	I1212 00:37:11.376192  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.376199  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:11.376205  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:11.376274  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:11.401416  530956 cri.go:89] found id: ""
	I1212 00:37:11.401430  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.401437  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:11.401442  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:11.401501  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:11.426005  530956 cri.go:89] found id: ""
	I1212 00:37:11.426019  530956 logs.go:282] 0 containers: []
	W1212 00:37:11.426026  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:11.426034  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:11.426044  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:11.440817  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:11.440832  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:11.505805  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:11.496652   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.497359   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.499136   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.499679   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:11.501366   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:11.505819  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:11.505832  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:11.581171  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:11.581192  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:11.614667  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:11.614699  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:14.182453  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:14.192683  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:14.192743  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:14.224011  530956 cri.go:89] found id: ""
	I1212 00:37:14.224025  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.224032  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:14.224037  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:14.224097  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:14.253937  530956 cri.go:89] found id: ""
	I1212 00:37:14.253951  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.253958  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:14.253963  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:14.254034  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:14.291025  530956 cri.go:89] found id: ""
	I1212 00:37:14.291039  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.291047  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:14.291057  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:14.291117  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:14.318045  530956 cri.go:89] found id: ""
	I1212 00:37:14.318059  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.318066  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:14.318072  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:14.318133  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:14.345053  530956 cri.go:89] found id: ""
	I1212 00:37:14.345074  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.345082  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:14.345087  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:14.345151  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:14.370315  530956 cri.go:89] found id: ""
	I1212 00:37:14.370328  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.370335  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:14.370340  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:14.370397  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:14.400128  530956 cri.go:89] found id: ""
	I1212 00:37:14.400142  530956 logs.go:282] 0 containers: []
	W1212 00:37:14.400149  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:14.400156  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:14.400166  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:14.469510  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:14.469528  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:14.497946  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:14.497962  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:14.567259  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:14.567276  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:14.581753  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:14.581768  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:14.649334  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:14.641152   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.642071   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.643581   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.644076   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:14.645435   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:17.151022  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:17.161375  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:17.161433  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:17.187128  530956 cri.go:89] found id: ""
	I1212 00:37:17.187144  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.187151  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:17.187157  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:17.187224  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:17.212545  530956 cri.go:89] found id: ""
	I1212 00:37:17.212560  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.212567  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:17.212573  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:17.212632  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:17.239817  530956 cri.go:89] found id: ""
	I1212 00:37:17.239831  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.239838  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:17.239843  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:17.239900  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:17.267133  530956 cri.go:89] found id: ""
	I1212 00:37:17.267147  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.267155  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:17.267160  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:17.267232  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:17.304534  530956 cri.go:89] found id: ""
	I1212 00:37:17.304548  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.304554  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:17.304559  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:17.304618  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:17.330052  530956 cri.go:89] found id: ""
	I1212 00:37:17.330066  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.330073  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:17.330078  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:17.330133  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:17.354652  530956 cri.go:89] found id: ""
	I1212 00:37:17.354671  530956 logs.go:282] 0 containers: []
	W1212 00:37:17.354678  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:17.354705  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:17.354715  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:17.421755  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:17.412804   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.413382   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.415079   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.415827   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:17.417552   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:17.421766  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:17.421779  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:17.496810  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:17.496835  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:17.525867  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:17.525886  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:17.594454  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:17.594475  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
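Each cycle gathers the same diagnostics: the CRI-O journal, container status, the kubelet journal, dmesg, and a `describe nodes` attempt. For manual triage, the non-kubectl pieces can be collected in one pass (a sketch reusing the exact commands from the log, with `--no-pager` added for non-interactive use):

    # Same bundle the harness collects on every failed cycle.
    for unit in crio kubelet; do
      sudo journalctl -u "$unit" -n 400 --no-pager
    done
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400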
	I1212 00:37:20.109774  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:20.119858  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:20.119916  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:20.148052  530956 cri.go:89] found id: ""
	I1212 00:37:20.148066  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.148073  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:20.148078  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:20.148138  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:20.172308  530956 cri.go:89] found id: ""
	I1212 00:37:20.172322  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.172329  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:20.172334  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:20.172392  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:20.200721  530956 cri.go:89] found id: ""
	I1212 00:37:20.200735  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.200743  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:20.200748  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:20.200807  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:20.232123  530956 cri.go:89] found id: ""
	I1212 00:37:20.232136  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.232143  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:20.232148  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:20.232207  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:20.263625  530956 cri.go:89] found id: ""
	I1212 00:37:20.263638  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.263646  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:20.263651  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:20.263710  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:20.292234  530956 cri.go:89] found id: ""
	I1212 00:37:20.292248  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.292255  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:20.292260  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:20.292319  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:20.316784  530956 cri.go:89] found id: ""
	I1212 00:37:20.316798  530956 logs.go:282] 0 containers: []
	W1212 00:37:20.316804  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:20.316812  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:20.316822  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:20.382530  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:20.382550  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:20.397572  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:20.397587  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:20.462516  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:20.453480   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.454137   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.455857   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.456349   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:20.458004   11508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:20.462526  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:20.462536  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:20.536302  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:20.536323  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:23.067516  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:23.077747  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:23.077816  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:23.102753  530956 cri.go:89] found id: ""
	I1212 00:37:23.102767  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.102774  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:23.102780  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:23.102845  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:23.128706  530956 cri.go:89] found id: ""
	I1212 00:37:23.128719  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.128727  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:23.128732  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:23.128792  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:23.154481  530956 cri.go:89] found id: ""
	I1212 00:37:23.154495  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.154502  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:23.154507  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:23.154572  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:23.179609  530956 cri.go:89] found id: ""
	I1212 00:37:23.179622  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.179630  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:23.179635  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:23.179699  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:23.205151  530956 cri.go:89] found id: ""
	I1212 00:37:23.205165  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.205172  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:23.205177  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:23.205238  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:23.242297  530956 cri.go:89] found id: ""
	I1212 00:37:23.242312  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.242319  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:23.242324  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:23.242393  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:23.271432  530956 cri.go:89] found id: ""
	I1212 00:37:23.271446  530956 logs.go:282] 0 containers: []
	W1212 00:37:23.271453  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:23.271461  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:23.271472  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:23.339885  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:23.339904  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:23.355098  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:23.355115  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:23.419229  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:23.410980   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.411565   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.413072   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.413369   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:23.415026   11610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:23.419240  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:23.419250  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:23.486458  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:23.486478  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:26.021866  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:26.032710  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:26.032772  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:26.058774  530956 cri.go:89] found id: ""
	I1212 00:37:26.058811  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.058818  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:26.058824  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:26.058887  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:26.084731  530956 cri.go:89] found id: ""
	I1212 00:37:26.084746  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.084753  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:26.084758  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:26.084821  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:26.110515  530956 cri.go:89] found id: ""
	I1212 00:37:26.110529  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.110536  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:26.110541  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:26.110598  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:26.137082  530956 cri.go:89] found id: ""
	I1212 00:37:26.137095  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.137103  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:26.137112  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:26.137172  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:26.162724  530956 cri.go:89] found id: ""
	I1212 00:37:26.162738  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.162745  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:26.162751  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:26.162818  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:26.188538  530956 cri.go:89] found id: ""
	I1212 00:37:26.188559  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.188566  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:26.188571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:26.188630  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:26.219848  530956 cri.go:89] found id: ""
	I1212 00:37:26.219862  530956 logs.go:282] 0 containers: []
	W1212 00:37:26.219869  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:26.219876  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:26.219887  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:26.291444  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:26.291463  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:26.306938  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:26.306954  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:26.368571  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:26.360215   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.360983   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.362459   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.362990   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:26.364656   11716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:26.368581  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:26.368593  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:26.436229  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:26.436247  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:28.966999  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:28.976928  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:28.976991  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:29.003108  530956 cri.go:89] found id: ""
	I1212 00:37:29.003123  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.003130  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:29.003136  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:29.003212  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:29.028803  530956 cri.go:89] found id: ""
	I1212 00:37:29.028817  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.028824  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:29.028828  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:29.028885  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:29.056738  530956 cri.go:89] found id: ""
	I1212 00:37:29.056758  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.056765  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:29.056770  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:29.056828  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:29.081270  530956 cri.go:89] found id: ""
	I1212 00:37:29.081284  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.081291  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:29.081297  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:29.081354  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:29.106545  530956 cri.go:89] found id: ""
	I1212 00:37:29.106559  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.106566  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:29.106571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:29.106629  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:29.133248  530956 cri.go:89] found id: ""
	I1212 00:37:29.133262  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.133270  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:29.133275  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:29.133335  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:29.162606  530956 cri.go:89] found id: ""
	I1212 00:37:29.162620  530956 logs.go:282] 0 containers: []
	W1212 00:37:29.162627  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:29.162634  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:29.162645  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:29.228360  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:29.228380  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:29.244576  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:29.244593  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:29.318498  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:29.310629   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.311117   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.312690   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.313032   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:29.314613   11815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:29.318508  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:29.318519  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:29.386989  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:29.387009  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:31.922335  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:31.932487  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:31.932555  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:31.958330  530956 cri.go:89] found id: ""
	I1212 00:37:31.958344  530956 logs.go:282] 0 containers: []
	W1212 00:37:31.958351  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:31.958356  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:31.958413  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:31.986166  530956 cri.go:89] found id: ""
	I1212 00:37:31.986184  530956 logs.go:282] 0 containers: []
	W1212 00:37:31.986193  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:31.986198  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:31.986263  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:32.018215  530956 cri.go:89] found id: ""
	I1212 00:37:32.018229  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.018236  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:32.018241  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:32.018309  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:32.045496  530956 cri.go:89] found id: ""
	I1212 00:37:32.045510  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.045526  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:32.045531  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:32.045599  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:32.071713  530956 cri.go:89] found id: ""
	I1212 00:37:32.071727  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.071733  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:32.071748  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:32.071809  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:32.096398  530956 cri.go:89] found id: ""
	I1212 00:37:32.096412  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.096419  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:32.096424  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:32.096481  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:32.121995  530956 cri.go:89] found id: ""
	I1212 00:37:32.122009  530956 logs.go:282] 0 containers: []
	W1212 00:37:32.122016  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:32.122024  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:32.122033  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:32.187537  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:32.187556  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:32.202073  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:32.202088  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:32.283678  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:32.269254   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.269661   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.271076   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.271701   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:32.275343   11915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:32.283688  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:32.283699  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:32.352426  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:32.352446  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:34.887315  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:34.897374  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:34.897440  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:34.922626  530956 cri.go:89] found id: ""
	I1212 00:37:34.922641  530956 logs.go:282] 0 containers: []
	W1212 00:37:34.922648  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:34.922654  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:34.922741  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:34.948176  530956 cri.go:89] found id: ""
	I1212 00:37:34.948190  530956 logs.go:282] 0 containers: []
	W1212 00:37:34.948199  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:34.948204  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:34.948302  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:34.975855  530956 cri.go:89] found id: ""
	I1212 00:37:34.975869  530956 logs.go:282] 0 containers: []
	W1212 00:37:34.975883  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:34.975889  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:34.975954  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:35.008030  530956 cri.go:89] found id: ""
	I1212 00:37:35.008046  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.008054  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:35.008060  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:35.008144  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:35.033803  530956 cri.go:89] found id: ""
	I1212 00:37:35.033816  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.033823  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:35.033828  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:35.033887  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:35.059521  530956 cri.go:89] found id: ""
	I1212 00:37:35.059535  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.059542  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:35.059547  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:35.059604  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:35.084378  530956 cri.go:89] found id: ""
	I1212 00:37:35.084392  530956 logs.go:282] 0 containers: []
	W1212 00:37:35.084399  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:35.084406  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:35.084416  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:35.150144  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:35.150166  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:35.164295  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:35.164311  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:35.237720  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:35.229555   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.230202   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.231874   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.232277   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:35.233798   12019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:35.237730  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:35.237740  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:35.309700  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:35.309721  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:37.842191  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:37.852127  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:37.852198  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:37.883852  530956 cri.go:89] found id: ""
	I1212 00:37:37.883866  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.883873  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:37.883879  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:37.883940  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:37.908974  530956 cri.go:89] found id: ""
	I1212 00:37:37.908988  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.908995  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:37.909000  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:37.909058  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:37.934558  530956 cri.go:89] found id: ""
	I1212 00:37:37.934581  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.934588  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:37.934593  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:37.934659  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:37.960620  530956 cri.go:89] found id: ""
	I1212 00:37:37.960634  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.960641  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:37.960653  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:37.960716  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:37.985545  530956 cri.go:89] found id: ""
	I1212 00:37:37.985559  530956 logs.go:282] 0 containers: []
	W1212 00:37:37.985566  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:37.985571  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:37.985649  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:38.019481  530956 cri.go:89] found id: ""
	I1212 00:37:38.019496  530956 logs.go:282] 0 containers: []
	W1212 00:37:38.019511  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:38.019517  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:38.019587  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:38.050591  530956 cri.go:89] found id: ""
	I1212 00:37:38.050606  530956 logs.go:282] 0 containers: []
	W1212 00:37:38.050613  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:38.050621  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:38.050631  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:38.118052  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:38.118073  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:38.133136  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:38.133152  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:38.195824  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:38.187464   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.188136   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.189863   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.190376   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:38.191908   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:38.195836  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:38.195847  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:38.277789  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:38.277816  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:40.807649  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:40.817759  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:40.817820  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:40.843061  530956 cri.go:89] found id: ""
	I1212 00:37:40.843075  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.843082  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:40.843087  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:40.843147  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:40.867922  530956 cri.go:89] found id: ""
	I1212 00:37:40.867936  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.867944  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:40.867949  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:40.868005  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:40.892630  530956 cri.go:89] found id: ""
	I1212 00:37:40.892644  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.892653  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:40.892657  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:40.892716  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:40.918166  530956 cri.go:89] found id: ""
	I1212 00:37:40.918180  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.918187  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:40.918192  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:40.918250  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:40.944075  530956 cri.go:89] found id: ""
	I1212 00:37:40.944088  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.944095  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:40.944100  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:40.944160  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:40.969320  530956 cri.go:89] found id: ""
	I1212 00:37:40.969333  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.969340  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:40.969346  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:40.969405  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:40.997473  530956 cri.go:89] found id: ""
	I1212 00:37:40.997487  530956 logs.go:282] 0 containers: []
	W1212 00:37:40.997494  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:40.997501  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:40.997512  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:41.028728  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:41.028743  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:41.095087  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:41.095107  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:41.109485  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:41.109501  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:41.176844  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:41.166571   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.167470   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.170826   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.171336   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:41.172874   12234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:41.176853  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:41.176864  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:43.749966  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:43.760058  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:43.760118  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:43.785533  530956 cri.go:89] found id: ""
	I1212 00:37:43.785546  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.785554  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:43.785559  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:43.785616  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:43.812938  530956 cri.go:89] found id: ""
	I1212 00:37:43.812952  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.812960  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:43.812964  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:43.813029  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:43.838583  530956 cri.go:89] found id: ""
	I1212 00:37:43.838596  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.838604  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:43.838609  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:43.838669  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:43.864548  530956 cri.go:89] found id: ""
	I1212 00:37:43.864562  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.864569  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:43.864574  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:43.864633  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:43.889391  530956 cri.go:89] found id: ""
	I1212 00:37:43.889405  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.889412  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:43.889417  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:43.889478  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:43.914183  530956 cri.go:89] found id: ""
	I1212 00:37:43.914196  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.914203  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:43.914209  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:43.914268  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:43.941097  530956 cri.go:89] found id: ""
	I1212 00:37:43.941112  530956 logs.go:282] 0 containers: []
	W1212 00:37:43.941119  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:43.941126  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:43.941136  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:44.007607  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:44.007625  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:44.022976  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:44.022993  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:44.087167  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:44.078521   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.079166   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.080973   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.081418   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:44.083213   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:44.087177  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:44.087190  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:44.156045  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:44.156065  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:46.684537  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:46.694320  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:46.694383  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:46.718727  530956 cri.go:89] found id: ""
	I1212 00:37:46.718741  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.718751  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:46.718756  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:46.718832  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:46.744753  530956 cri.go:89] found id: ""
	I1212 00:37:46.744767  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.744774  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:46.744779  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:46.744838  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:46.773525  530956 cri.go:89] found id: ""
	I1212 00:37:46.773538  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.773546  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:46.773551  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:46.773608  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:46.798518  530956 cri.go:89] found id: ""
	I1212 00:37:46.798532  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.798539  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:46.798544  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:46.798602  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:46.822867  530956 cri.go:89] found id: ""
	I1212 00:37:46.822880  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.822887  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:46.822893  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:46.822949  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:46.849825  530956 cri.go:89] found id: ""
	I1212 00:37:46.849839  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.849846  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:46.849851  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:46.849909  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:46.874986  530956 cri.go:89] found id: ""
	I1212 00:37:46.874999  530956 logs.go:282] 0 containers: []
	W1212 00:37:46.875011  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:46.875019  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:46.875030  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:46.939887  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:46.931753   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.932307   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.933826   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.934346   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:46.936019   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:46.939896  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:46.939909  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:47.008024  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:47.008044  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:47.036373  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:47.036388  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:47.101329  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:47.101347  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:49.616038  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:49.626178  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:49.626240  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:49.652682  530956 cri.go:89] found id: ""
	I1212 00:37:49.652696  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.652703  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:49.652708  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:49.652766  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:49.679170  530956 cri.go:89] found id: ""
	I1212 00:37:49.679185  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.679191  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:49.679197  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:49.679256  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:49.706504  530956 cri.go:89] found id: ""
	I1212 00:37:49.706518  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.706526  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:49.706532  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:49.706592  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:49.732201  530956 cri.go:89] found id: ""
	I1212 00:37:49.732215  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.732222  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:49.732227  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:49.732287  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:49.757094  530956 cri.go:89] found id: ""
	I1212 00:37:49.757107  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.757115  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:49.757119  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:49.757178  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:49.785367  530956 cri.go:89] found id: ""
	I1212 00:37:49.785382  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.785391  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:49.785396  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:49.785466  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:49.809132  530956 cri.go:89] found id: ""
	I1212 00:37:49.809145  530956 logs.go:282] 0 containers: []
	W1212 00:37:49.809152  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:49.809160  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:49.809171  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:49.874272  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:49.874291  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:49.888851  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:49.888866  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:49.954139  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:49.945852   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.946386   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.948180   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.948551   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:49.950144   12539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:49.954152  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:49.954164  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:50.021343  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:50.021364  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:52.550858  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:52.560788  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:52.560857  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:52.589542  530956 cri.go:89] found id: ""
	I1212 00:37:52.589556  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.589563  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:52.589568  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:52.589629  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:52.613111  530956 cri.go:89] found id: ""
	I1212 00:37:52.613124  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.613131  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:52.613136  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:52.613195  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:52.637059  530956 cri.go:89] found id: ""
	I1212 00:37:52.637072  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.637079  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:52.637084  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:52.637142  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:52.661402  530956 cri.go:89] found id: ""
	I1212 00:37:52.661415  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.661422  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:52.661428  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:52.661485  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:52.686208  530956 cri.go:89] found id: ""
	I1212 00:37:52.686221  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.686228  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:52.686234  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:52.686292  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:52.714239  530956 cri.go:89] found id: ""
	I1212 00:37:52.714257  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.714272  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:52.714281  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:52.714360  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:52.738849  530956 cri.go:89] found id: ""
	I1212 00:37:52.738862  530956 logs.go:282] 0 containers: []
	W1212 00:37:52.738871  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:52.738878  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:52.738889  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:52.805309  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:52.796653   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.797158   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.798767   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.799405   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:52.800977   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:37:52.805318  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:52.805329  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:52.873118  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:52.873138  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:52.901072  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:52.901088  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:52.967085  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:52.967104  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:55.482800  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:55.493703  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:55.493761  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:55.527575  530956 cri.go:89] found id: ""
	I1212 00:37:55.527588  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.527595  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:55.527601  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:55.527663  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:55.552177  530956 cri.go:89] found id: ""
	I1212 00:37:55.552191  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.552198  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:55.552203  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:55.552264  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:55.576968  530956 cri.go:89] found id: ""
	I1212 00:37:55.576981  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.576988  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:55.576993  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:55.577054  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:55.603212  530956 cri.go:89] found id: ""
	I1212 00:37:55.603225  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.603232  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:55.603237  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:55.603300  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:55.629922  530956 cri.go:89] found id: ""
	I1212 00:37:55.629936  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.629943  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:55.629949  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:55.630009  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:55.659450  530956 cri.go:89] found id: ""
	I1212 00:37:55.659469  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.659476  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:55.659482  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:55.659540  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:55.683953  530956 cri.go:89] found id: ""
	I1212 00:37:55.683967  530956 logs.go:282] 0 containers: []
	W1212 00:37:55.683974  530956 logs.go:284] No container was found matching "kindnet"
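	The scan above is one check repeated per control-plane component: crictl ps -a --quiet --name=<component> prints matching container IDs, and an empty result is what produces each No container was found matching warning. Expressed as a loop (crictl flags as in the log; the loop itself is illustrative):

	    for c in kube-apiserver etcd coredns kube-scheduler \
	             kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      [ -z "$ids" ] && echo "No container was found matching \"$c\""
	    done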
	I1212 00:37:55.683981  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:55.683991  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:55.752000  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:55.752019  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:55.781847  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:55.781863  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:55.846599  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:55.846617  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:55.861470  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:55.861487  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:55.927422  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:55.918837   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.919637   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.921344   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.921624   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:55.923175   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
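	The timestamps (00:37:52, :55, :58, then 00:38:01 and so on) show the harness re-polling for an apiserver process roughly every three seconds. A hedged sketch of that wait loop (the pgrep pattern is copied from the log; the sleep interval is inferred from the timestamps, not shown explicitly):

	    # poll until a kube-apiserver process for this profile appears
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	      sleep 3
	    done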
	I1212 00:37:58.429107  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:58.438890  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:37:58.438951  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:37:58.463332  530956 cri.go:89] found id: ""
	I1212 00:37:58.463346  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.463353  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:37:58.463358  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:37:58.463420  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:37:58.502844  530956 cri.go:89] found id: ""
	I1212 00:37:58.502859  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.502866  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:37:58.502871  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:37:58.502934  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:37:58.535191  530956 cri.go:89] found id: ""
	I1212 00:37:58.535204  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.535211  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:37:58.535216  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:37:58.535275  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:37:58.560276  530956 cri.go:89] found id: ""
	I1212 00:37:58.560290  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.560296  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:37:58.560302  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:37:58.560360  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:37:58.585008  530956 cri.go:89] found id: ""
	I1212 00:37:58.585022  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.585029  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:37:58.585034  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:37:58.585092  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:37:58.610668  530956 cri.go:89] found id: ""
	I1212 00:37:58.610704  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.610712  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:37:58.610717  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:37:58.610791  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:37:58.633946  530956 cri.go:89] found id: ""
	I1212 00:37:58.633960  530956 logs.go:282] 0 containers: []
	W1212 00:37:58.633967  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:37:58.633974  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:37:58.633984  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:37:58.702859  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:37:58.702878  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:37:58.730459  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:37:58.730475  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:37:58.799001  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:37:58.799020  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:37:58.813707  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:37:58.813724  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:37:58.880292  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:58.871863   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.872482   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.874082   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.874638   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:58.876520   12866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:01.380529  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:01.390377  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:01.390440  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:01.414742  530956 cri.go:89] found id: ""
	I1212 00:38:01.414755  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.414763  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:01.414769  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:01.414848  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:01.440014  530956 cri.go:89] found id: ""
	I1212 00:38:01.440028  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.440035  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:01.440040  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:01.440100  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:01.469919  530956 cri.go:89] found id: ""
	I1212 00:38:01.469947  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.469955  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:01.469963  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:01.470025  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:01.502102  530956 cri.go:89] found id: ""
	I1212 00:38:01.502116  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.502123  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:01.502128  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:01.502185  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:01.550477  530956 cri.go:89] found id: ""
	I1212 00:38:01.550497  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.550504  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:01.550509  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:01.550572  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:01.575848  530956 cri.go:89] found id: ""
	I1212 00:38:01.575861  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.575868  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:01.575874  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:01.575933  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:01.601329  530956 cri.go:89] found id: ""
	I1212 00:38:01.601342  530956 logs.go:282] 0 containers: []
	W1212 00:38:01.601350  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:01.601358  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:01.601369  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:01.617336  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:01.617351  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:01.681650  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:01.672976   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.673795   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.675461   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.675793   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:01.677308   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:01.681659  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:01.681669  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:01.753959  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:01.753987  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:01.784884  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:01.784901  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:04.352224  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:04.362582  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:04.362651  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:04.387422  530956 cri.go:89] found id: ""
	I1212 00:38:04.387436  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.387443  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:04.387448  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:04.387515  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:04.416278  530956 cri.go:89] found id: ""
	I1212 00:38:04.416292  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.416298  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:04.416304  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:04.416360  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:04.445370  530956 cri.go:89] found id: ""
	I1212 00:38:04.445384  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.445391  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:04.445397  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:04.445455  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:04.482755  530956 cri.go:89] found id: ""
	I1212 00:38:04.482768  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.482783  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:04.482789  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:04.482857  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:04.509091  530956 cri.go:89] found id: ""
	I1212 00:38:04.509105  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.509120  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:04.509126  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:04.509194  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:04.539958  530956 cri.go:89] found id: ""
	I1212 00:38:04.539980  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.539987  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:04.539995  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:04.540053  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:04.565072  530956 cri.go:89] found id: ""
	I1212 00:38:04.565085  530956 logs.go:282] 0 containers: []
	W1212 00:38:04.565092  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:04.565100  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:04.565110  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:04.632823  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:04.632844  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:04.659747  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:04.659763  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:04.726963  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:04.726980  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:04.742446  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:04.742462  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:04.811712  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:04.802381   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.803275   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.805003   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.805618   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:04.807784   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:07.313373  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:07.323395  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:07.323461  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:07.349092  530956 cri.go:89] found id: ""
	I1212 00:38:07.349106  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.349114  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:07.349119  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:07.349178  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:07.374733  530956 cri.go:89] found id: ""
	I1212 00:38:07.374747  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.374754  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:07.374759  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:07.374826  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:07.399425  530956 cri.go:89] found id: ""
	I1212 00:38:07.399439  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.399446  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:07.399450  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:07.399509  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:07.423784  530956 cri.go:89] found id: ""
	I1212 00:38:07.423798  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.423805  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:07.423809  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:07.423866  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:07.449601  530956 cri.go:89] found id: ""
	I1212 00:38:07.449615  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.449622  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:07.449627  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:07.449687  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:07.483778  530956 cri.go:89] found id: ""
	I1212 00:38:07.483793  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.483800  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:07.483805  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:07.483863  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:07.514105  530956 cri.go:89] found id: ""
	I1212 00:38:07.514118  530956 logs.go:282] 0 containers: []
	W1212 00:38:07.514126  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:07.514135  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:07.514144  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:07.584461  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:07.584483  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:07.599076  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:07.599092  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:07.662502  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:07.654255   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.655086   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.656662   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.656958   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:07.658502   13169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:07.662512  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:07.662524  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:07.730514  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:07.730532  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:10.261580  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:10.271806  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:10.271866  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:10.301488  530956 cri.go:89] found id: ""
	I1212 00:38:10.301509  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.301517  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:10.301522  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:10.301586  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:10.328569  530956 cri.go:89] found id: ""
	I1212 00:38:10.328582  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.328589  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:10.328594  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:10.328651  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:10.352390  530956 cri.go:89] found id: ""
	I1212 00:38:10.352404  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.352411  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:10.352416  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:10.352476  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:10.376595  530956 cri.go:89] found id: ""
	I1212 00:38:10.376608  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.376615  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:10.376620  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:10.376676  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:10.401114  530956 cri.go:89] found id: ""
	I1212 00:38:10.401129  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.401136  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:10.401141  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:10.401202  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:10.426633  530956 cri.go:89] found id: ""
	I1212 00:38:10.426647  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.426654  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:10.426659  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:10.426740  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:10.452233  530956 cri.go:89] found id: ""
	I1212 00:38:10.452246  530956 logs.go:282] 0 containers: []
	W1212 00:38:10.452254  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:10.452262  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:10.452272  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:10.521036  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:10.521055  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:10.535759  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:10.535774  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:10.601793  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:10.593515   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.594074   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.595582   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.596077   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:10.597523   13275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
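	When crictl reports no kube-apiserver container at all, not even an exited one, the usual next step is to check whether kubelet ever tried to start the static pod. A sketch, assuming the standard kubeadm layout that minikube uses (the manifest path is an assumption, not shown anywhere in this log):

	    # assumption: kubeadm-style static pod manifests
	    ls /etc/kubernetes/manifests/
	    # look for apiserver start or image-pull failures in the kubelet journal
	    sudo journalctl -u kubelet -n 400 | grep -iE 'kube-apiserver|fail|error'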
	I1212 00:38:10.601803  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:10.601813  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:10.672541  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:10.672560  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:13.203975  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:13.213736  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:13.213796  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:13.238219  530956 cri.go:89] found id: ""
	I1212 00:38:13.238234  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.238241  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:13.238246  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:13.238303  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:13.262428  530956 cri.go:89] found id: ""
	I1212 00:38:13.262441  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.262449  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:13.262454  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:13.262518  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:13.287118  530956 cri.go:89] found id: ""
	I1212 00:38:13.287132  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.287139  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:13.287144  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:13.287201  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:13.316471  530956 cri.go:89] found id: ""
	I1212 00:38:13.316485  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.316492  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:13.316497  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:13.316554  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:13.340630  530956 cri.go:89] found id: ""
	I1212 00:38:13.340644  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.340651  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:13.340656  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:13.340719  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:13.365167  530956 cri.go:89] found id: ""
	I1212 00:38:13.365180  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.365187  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:13.365192  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:13.365249  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:13.393786  530956 cri.go:89] found id: ""
	I1212 00:38:13.393800  530956 logs.go:282] 0 containers: []
	W1212 00:38:13.393806  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:13.393813  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:13.393824  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:13.460497  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:13.460517  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:13.484321  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:13.484350  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:13.564959  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:13.556484   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.557156   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.558914   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.559521   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:13.561122   13380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:38:13.564970  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:13.564991  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:13.633622  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:13.633641  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:16.165859  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:16.179076  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:16.179137  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:16.204832  530956 cri.go:89] found id: ""
	I1212 00:38:16.204846  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.204853  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:16.204858  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:16.204929  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:16.230899  530956 cri.go:89] found id: ""
	I1212 00:38:16.230912  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.230920  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:16.230924  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:16.230985  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:16.260492  530956 cri.go:89] found id: ""
	I1212 00:38:16.260505  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.260513  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:16.260518  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:16.260582  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:16.285639  530956 cri.go:89] found id: ""
	I1212 00:38:16.285652  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.285660  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:16.285665  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:16.285724  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:16.311240  530956 cri.go:89] found id: ""
	I1212 00:38:16.311253  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.311261  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:16.311266  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:16.311331  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:16.337039  530956 cri.go:89] found id: ""
	I1212 00:38:16.337053  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.337060  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:16.337065  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:16.337132  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:16.363033  530956 cri.go:89] found id: ""
	I1212 00:38:16.363047  530956 logs.go:282] 0 containers: []
	W1212 00:38:16.363053  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:16.363061  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:16.363072  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:16.393154  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:16.393171  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:16.460499  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:16.460516  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:16.475666  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:16.475681  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:16.550358  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:16.542066   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.542789   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.544568   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.545193   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:16.546806   13498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
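	A quicker liveness check than a full describe nodes is to hit the apiserver's /healthz endpoint directly; while the port is closed it fails exactly like the kubectl calls above (this probe is an assumption, not part of the test run):

	    # assumption: manual probe; -k skips cert verification on the local endpoint
	    curl -sk https://localhost:8441/healthz; echo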
	I1212 00:38:16.550367  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:16.550378  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:19.117450  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:19.129437  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:19.129500  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:19.153970  530956 cri.go:89] found id: ""
	I1212 00:38:19.153983  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.153990  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:19.153995  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:19.154052  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:19.179294  530956 cri.go:89] found id: ""
	I1212 00:38:19.179307  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.179314  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:19.179319  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:19.179381  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:19.205071  530956 cri.go:89] found id: ""
	I1212 00:38:19.205091  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.205098  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:19.205103  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:19.205168  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:19.230084  530956 cri.go:89] found id: ""
	I1212 00:38:19.230098  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.230111  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:19.230118  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:19.230181  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:19.255464  530956 cri.go:89] found id: ""
	I1212 00:38:19.255477  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.255485  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:19.255490  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:19.255549  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:19.285389  530956 cri.go:89] found id: ""
	I1212 00:38:19.285402  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.285409  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:19.285415  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:19.285472  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:19.312947  530956 cri.go:89] found id: ""
	I1212 00:38:19.312960  530956 logs.go:282] 0 containers: []
	W1212 00:38:19.312967  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:19.312975  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:19.312985  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:19.350894  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:19.350911  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:19.417923  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:19.417945  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:19.432429  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:19.432445  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:19.505932  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:19.498121   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.498890   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500411   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500702   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.502118   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:19.498121   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.498890   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500411   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.500702   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:19.502118   13597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:19.505942  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:19.505964  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:22.083196  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:22.093637  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:22.093699  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:22.118550  530956 cri.go:89] found id: ""
	I1212 00:38:22.118565  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.118572  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:22.118578  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:22.118636  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:22.145134  530956 cri.go:89] found id: ""
	I1212 00:38:22.145147  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.145155  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:22.145159  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:22.145217  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:22.170293  530956 cri.go:89] found id: ""
	I1212 00:38:22.170306  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.170313  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:22.170318  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:22.170386  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:22.197536  530956 cri.go:89] found id: ""
	I1212 00:38:22.197550  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.197571  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:22.197576  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:22.197642  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:22.222476  530956 cri.go:89] found id: ""
	I1212 00:38:22.222490  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.222497  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:22.222502  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:22.222560  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:22.247759  530956 cri.go:89] found id: ""
	I1212 00:38:22.247779  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.247792  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:22.247797  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:22.247865  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:22.278000  530956 cri.go:89] found id: ""
	I1212 00:38:22.278022  530956 logs.go:282] 0 containers: []
	W1212 00:38:22.278030  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:22.278037  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:22.278047  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:22.306112  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:22.306127  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:22.377647  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:22.377675  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:22.394490  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:22.394506  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:22.462988  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:22.454404   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.455058   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.456732   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.457164   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.458875   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:22.454404   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.455058   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.456732   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.457164   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:22.458875   13702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:22.462999  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:22.463010  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:25.044675  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:25.054532  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:25.054592  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:25.080041  530956 cri.go:89] found id: ""
	I1212 00:38:25.080055  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.080062  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:25.080068  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:25.080129  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:25.105941  530956 cri.go:89] found id: ""
	I1212 00:38:25.105957  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.105965  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:25.105971  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:25.106038  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:25.136063  530956 cri.go:89] found id: ""
	I1212 00:38:25.136078  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.136086  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:25.136096  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:25.136159  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:25.161125  530956 cri.go:89] found id: ""
	I1212 00:38:25.161140  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.161147  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:25.161153  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:25.161212  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:25.187318  530956 cri.go:89] found id: ""
	I1212 00:38:25.187333  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.187340  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:25.187345  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:25.187407  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:25.213505  530956 cri.go:89] found id: ""
	I1212 00:38:25.213519  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.213528  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:25.213533  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:25.213593  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:25.238804  530956 cri.go:89] found id: ""
	I1212 00:38:25.238818  530956 logs.go:282] 0 containers: []
	W1212 00:38:25.238825  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:25.238833  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:25.238845  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:25.253570  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:25.253586  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:25.319774  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:25.310440   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.311167   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.312774   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.313270   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.315248   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:25.310440   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.311167   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.312774   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.313270   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:25.315248   13795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:25.319800  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:25.319811  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:25.392356  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:25.392375  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:25.422668  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:25.422706  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
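The per-component scan that opens each cycle above can be reproduced by hand with the same `crictl` invocation the log shows. A sketch, assuming crictl is already configured for the CRI-O socket on the node:

    # Mirrors the per-component scan in the log; empty output for a name
    # corresponds to the `found id: ""` lines (container never created).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done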
	I1212 00:38:27.990024  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:28.003363  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:28.003444  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:28.033003  530956 cri.go:89] found id: ""
	I1212 00:38:28.033017  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.033024  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:28.033029  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:28.033090  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:28.059854  530956 cri.go:89] found id: ""
	I1212 00:38:28.059869  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.059876  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:28.059881  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:28.059946  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:28.085318  530956 cri.go:89] found id: ""
	I1212 00:38:28.085332  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.085339  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:28.085349  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:28.085408  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:28.111377  530956 cri.go:89] found id: ""
	I1212 00:38:28.111390  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.111397  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:28.111403  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:28.111464  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:28.140880  530956 cri.go:89] found id: ""
	I1212 00:38:28.140894  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.140910  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:28.140915  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:28.140985  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:28.166928  530956 cri.go:89] found id: ""
	I1212 00:38:28.166943  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.166950  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:28.166955  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:28.167013  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:28.193116  530956 cri.go:89] found id: ""
	I1212 00:38:28.193129  530956 logs.go:282] 0 containers: []
	W1212 00:38:28.193136  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:28.193144  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:28.193157  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:28.207536  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:28.207551  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:28.273869  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:28.265632   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.266161   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.267761   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.268328   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.269940   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:28.265632   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.266161   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.267761   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.268328   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:28.269940   13898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:28.273878  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:28.273888  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:28.341616  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:28.341634  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:28.370270  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:28.370286  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:30.938812  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:30.948944  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:30.949000  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:30.977305  530956 cri.go:89] found id: ""
	I1212 00:38:30.977320  530956 logs.go:282] 0 containers: []
	W1212 00:38:30.977327  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:30.977333  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:30.977393  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:31.004773  530956 cri.go:89] found id: ""
	I1212 00:38:31.004793  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.004802  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:31.004807  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:31.004878  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:31.034217  530956 cri.go:89] found id: ""
	I1212 00:38:31.034231  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.034238  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:31.034243  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:31.034299  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:31.059299  530956 cri.go:89] found id: ""
	I1212 00:38:31.059313  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.059320  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:31.059325  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:31.059389  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:31.085777  530956 cri.go:89] found id: ""
	I1212 00:38:31.085794  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.085801  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:31.085806  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:31.085870  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:31.113432  530956 cri.go:89] found id: ""
	I1212 00:38:31.113445  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.113453  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:31.113458  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:31.113517  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:31.140290  530956 cri.go:89] found id: ""
	I1212 00:38:31.140303  530956 logs.go:282] 0 containers: []
	W1212 00:38:31.140310  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:31.140318  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:31.140329  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:31.170079  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:31.170095  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:31.237344  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:31.237366  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:31.252705  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:31.252722  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:31.314201  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:31.305835   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.306554   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.308300   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.308814   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.310412   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:31.305835   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.306554   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.308300   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.308814   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:31.310412   14019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:31.314211  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:31.314222  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
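The "describe nodes" gathering step fails on every pass for the same reason: discovery against localhost:8441 is refused. Re-running it in a bounded loop shows whether the apiserver ever comes up during the window. A sketch that reuses the binary path and kubeconfig verbatim from the log above:

    # Bounded retry of the same describe-nodes call (20 attempts, 3s apart).
    for i in $(seq 1 20); do
      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig && break
      sleep 3
    done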
	I1212 00:38:33.887992  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:33.897911  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:33.897978  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:33.922473  530956 cri.go:89] found id: ""
	I1212 00:38:33.922487  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.922494  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:33.922499  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:33.922556  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:33.947695  530956 cri.go:89] found id: ""
	I1212 00:38:33.947709  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.947716  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:33.947720  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:33.947779  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:33.975167  530956 cri.go:89] found id: ""
	I1212 00:38:33.975181  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.975188  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:33.975194  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:33.975256  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:33.999707  530956 cri.go:89] found id: ""
	I1212 00:38:33.999722  530956 logs.go:282] 0 containers: []
	W1212 00:38:33.999731  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:33.999736  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:33.999806  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:34.028202  530956 cri.go:89] found id: ""
	I1212 00:38:34.028216  530956 logs.go:282] 0 containers: []
	W1212 00:38:34.028224  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:34.028229  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:34.028289  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:34.053144  530956 cri.go:89] found id: ""
	I1212 00:38:34.053158  530956 logs.go:282] 0 containers: []
	W1212 00:38:34.053169  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:34.053175  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:34.053239  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:34.080035  530956 cri.go:89] found id: ""
	I1212 00:38:34.080050  530956 logs.go:282] 0 containers: []
	W1212 00:38:34.080058  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:34.080066  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:34.080076  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:34.146175  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:34.146192  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:34.160652  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:34.160668  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:34.223173  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:34.215210   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.216058   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.217521   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.217956   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.219415   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:34.215210   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.216058   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.217521   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.217956   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:34.219415   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:34.223184  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:34.223194  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:34.292571  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:34.292590  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:36.820393  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:36.830345  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:36.830406  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:36.854187  530956 cri.go:89] found id: ""
	I1212 00:38:36.854201  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.854208  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:36.854213  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:36.854268  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:36.882747  530956 cri.go:89] found id: ""
	I1212 00:38:36.882767  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.882774  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:36.882779  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:36.882836  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:36.909295  530956 cri.go:89] found id: ""
	I1212 00:38:36.909310  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.909317  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:36.909321  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:36.909380  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:36.939718  530956 cri.go:89] found id: ""
	I1212 00:38:36.939732  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.939739  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:36.939745  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:36.939805  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:36.985049  530956 cri.go:89] found id: ""
	I1212 00:38:36.985063  530956 logs.go:282] 0 containers: []
	W1212 00:38:36.985070  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:36.985075  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:36.985135  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:37.018069  530956 cri.go:89] found id: ""
	I1212 00:38:37.018092  530956 logs.go:282] 0 containers: []
	W1212 00:38:37.018101  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:37.018107  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:37.018197  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:37.045321  530956 cri.go:89] found id: ""
	I1212 00:38:37.045335  530956 logs.go:282] 0 containers: []
	W1212 00:38:37.045342  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:37.045349  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:37.045366  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:37.110695  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:37.110716  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:37.125484  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:37.125500  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:37.191768  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:37.183160   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.184307   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.184933   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.186530   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.186898   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:37.183160   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.184307   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.184933   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.186530   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:37.186898   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:37.191778  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:37.191789  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:37.258979  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:37.258998  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
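The container-status command above uses a small shell fallback idiom: `which crictl || echo crictl` substitutes the full path when crictl is installed and the bare name otherwise (so a failure message still names the tool), and `|| sudo docker ps -a` falls back to Docker if the crictl call fails. Written out, roughly:

    # Expanded form of the one-liner in the log.
    CRICTL="$(which crictl || echo crictl)"    # full path if installed, bare name otherwise
    sudo "$CRICTL" ps -a || sudo docker ps -a  # fall back to Docker if crictl fails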
	I1212 00:38:39.789133  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:39.799919  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:39.799985  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:39.825459  530956 cri.go:89] found id: ""
	I1212 00:38:39.825473  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.825481  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:39.825487  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:39.825550  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:39.853725  530956 cri.go:89] found id: ""
	I1212 00:38:39.853741  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.853750  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:39.853757  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:39.853833  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:39.879329  530956 cri.go:89] found id: ""
	I1212 00:38:39.879343  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.879350  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:39.879355  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:39.879417  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:39.910098  530956 cri.go:89] found id: ""
	I1212 00:38:39.910111  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.910118  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:39.910124  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:39.910184  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:39.940693  530956 cri.go:89] found id: ""
	I1212 00:38:39.940707  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.940714  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:39.940719  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:39.940779  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:39.967072  530956 cri.go:89] found id: ""
	I1212 00:38:39.967085  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.967093  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:39.967099  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:39.967165  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:39.992659  530956 cri.go:89] found id: ""
	I1212 00:38:39.992672  530956 logs.go:282] 0 containers: []
	W1212 00:38:39.992680  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:39.992687  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:39.992697  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:40.113165  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:40.113185  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:40.130134  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:40.130150  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:40.200442  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:40.191596   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.192392   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.194267   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.194628   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.196363   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:40.191596   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.192392   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.194267   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.194628   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:40.196363   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:40.200453  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:40.200463  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:40.271707  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:40.271728  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:42.801953  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:42.811892  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:42.811958  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:42.841306  530956 cri.go:89] found id: ""
	I1212 00:38:42.841320  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.841328  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:42.841334  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:42.841395  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:42.869294  530956 cri.go:89] found id: ""
	I1212 00:38:42.869308  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.869314  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:42.869319  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:42.869381  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:42.898367  530956 cri.go:89] found id: ""
	I1212 00:38:42.898381  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.898388  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:42.898393  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:42.898454  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:42.925039  530956 cri.go:89] found id: ""
	I1212 00:38:42.925052  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.925059  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:42.925065  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:42.925125  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:42.955313  530956 cri.go:89] found id: ""
	I1212 00:38:42.955327  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.955334  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:42.955339  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:42.955404  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:42.979722  530956 cri.go:89] found id: ""
	I1212 00:38:42.979735  530956 logs.go:282] 0 containers: []
	W1212 00:38:42.979742  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:42.979747  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:42.979808  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:43.027955  530956 cri.go:89] found id: ""
	I1212 00:38:43.027969  530956 logs.go:282] 0 containers: []
	W1212 00:38:43.027976  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:43.027983  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:43.027996  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:43.043222  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:43.043240  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:43.111269  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:43.102010   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.103597   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.104461   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.105967   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.106269   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:43.102010   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.103597   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.104461   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.105967   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:43.106269   14428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:43.111321  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:43.111331  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:43.177977  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:43.177997  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:43.206880  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:43.206895  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
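	[Editor's note] The block above is one full iteration of minikube's wait-for-apiserver loop: it probes for a kube-apiserver process with pgrep, lists CRI containers for each control-plane component, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying roughly every three seconds. The following minimal Go sketch illustrates that loop using only the commands visible in this log; it is not minikube's actual source, and the helper names, timeout, and sleep interval are assumptions.

    // Illustrative sketch (hypothetical, not minikube source) of the retry
    // loop visible in the surrounding log: probe for kube-apiserver, and on
    // failure dump diagnostics before sleeping and probing again.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // probe returns nil when a kube-apiserver process is running, mirroring
    // the "sudo pgrep -xnf kube-apiserver.*minikube.*" lines in the log.
    func probe() error {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    }

    // dumpDiagnostics mirrors the "Gathering logs for ..." steps in the log;
    // the command strings are copied from it verbatim.
    func dumpDiagnostics() {
    	for _, c := range []string{
    		"sudo journalctl -u kubelet -n 400",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    		"sudo journalctl -u crio -n 400",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	} {
    		out, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
    		fmt.Printf("--- %s ---\n%s\n", c, out)
    	}
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute) // timeout value is an assumption
    	for time.Now().Before(deadline) {
    		if err := probe(); err == nil {
    			fmt.Println("kube-apiserver is up")
    			return
    		}
    		dumpDiagnostics()
    		time.Sleep(3 * time.Second) // the log shows roughly 3s between probes
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }

	The repeated cycles that follow are this loop re-running with new timestamps and PIDs; no control-plane container ever appears.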
	I1212 00:38:45.775312  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:45.785672  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:45.785736  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:45.811375  530956 cri.go:89] found id: ""
	I1212 00:38:45.811389  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.811396  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:45.811400  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:45.811459  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:45.836941  530956 cri.go:89] found id: ""
	I1212 00:38:45.836956  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.836963  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:45.836968  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:45.837031  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:45.863375  530956 cri.go:89] found id: ""
	I1212 00:38:45.863389  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.863396  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:45.863402  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:45.863461  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:45.888628  530956 cri.go:89] found id: ""
	I1212 00:38:45.888641  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.888648  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:45.888654  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:45.888712  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:45.917199  530956 cri.go:89] found id: ""
	I1212 00:38:45.917213  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.917221  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:45.917226  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:45.917289  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:45.944008  530956 cri.go:89] found id: ""
	I1212 00:38:45.944022  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.944029  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:45.944034  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:45.944093  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:45.968971  530956 cri.go:89] found id: ""
	I1212 00:38:45.968984  530956 logs.go:282] 0 containers: []
	W1212 00:38:45.968992  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:45.969000  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:45.969010  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:46.034356  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:46.034375  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:46.048756  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:46.048771  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:46.115073  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:46.106286   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.107154   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.108746   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.109165   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.110638   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:46.106286   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.107154   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.108746   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.109165   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:46.110638   14535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:46.115096  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:46.115107  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:46.182387  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:46.182407  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:48.712482  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:48.722635  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:48.722715  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:48.752202  530956 cri.go:89] found id: ""
	I1212 00:38:48.752215  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.752222  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:48.752227  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:48.752287  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:48.779084  530956 cri.go:89] found id: ""
	I1212 00:38:48.779097  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.779105  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:48.779110  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:48.779165  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:48.803352  530956 cri.go:89] found id: ""
	I1212 00:38:48.803366  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.803375  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:48.803380  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:48.803441  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:48.829635  530956 cri.go:89] found id: ""
	I1212 00:38:48.829649  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.829656  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:48.829661  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:48.829720  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:48.854311  530956 cri.go:89] found id: ""
	I1212 00:38:48.854324  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.854332  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:48.854337  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:48.854394  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:48.879369  530956 cri.go:89] found id: ""
	I1212 00:38:48.879383  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.879390  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:48.879396  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:48.879456  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:48.908110  530956 cri.go:89] found id: ""
	I1212 00:38:48.908124  530956 logs.go:282] 0 containers: []
	W1212 00:38:48.908131  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:48.908138  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:48.908151  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:48.972035  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:48.972053  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:48.986646  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:48.986668  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:49.053589  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:49.045696   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.046251   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.047754   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.048291   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.049804   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:49.045696   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.046251   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.047754   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.048291   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:49.049804   14637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:49.053599  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:49.053608  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:49.123212  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:49.123236  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:51.651584  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:51.662032  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:51.662096  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:51.687559  530956 cri.go:89] found id: ""
	I1212 00:38:51.687573  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.687580  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:51.687586  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:51.687655  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:51.713801  530956 cri.go:89] found id: ""
	I1212 00:38:51.713828  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.713835  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:51.713840  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:51.713903  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:51.748993  530956 cri.go:89] found id: ""
	I1212 00:38:51.749006  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.749028  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:51.749034  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:51.749091  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:51.777108  530956 cri.go:89] found id: ""
	I1212 00:38:51.777122  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.777129  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:51.777135  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:51.777200  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:51.805174  530956 cri.go:89] found id: ""
	I1212 00:38:51.805188  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.805195  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:51.805201  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:51.805266  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:51.830660  530956 cri.go:89] found id: ""
	I1212 00:38:51.830674  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.830701  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:51.830706  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:51.830778  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:51.855989  530956 cri.go:89] found id: ""
	I1212 00:38:51.856003  530956 logs.go:282] 0 containers: []
	W1212 00:38:51.856017  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:51.856024  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:51.856035  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:51.887241  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:51.887257  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:51.953055  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:51.953075  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:51.969638  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:51.969660  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:52.045683  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:52.037116   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.037541   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.039334   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.039776   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.041452   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:52.037116   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.037541   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.039334   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.039776   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:52.041452   14756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:52.045694  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:52.045705  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:54.617323  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:54.627443  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:54.627502  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:54.651505  530956 cri.go:89] found id: ""
	I1212 00:38:54.651519  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.651526  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:54.651532  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:54.651589  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:54.675935  530956 cri.go:89] found id: ""
	I1212 00:38:54.675961  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.675968  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:54.675973  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:54.676042  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:54.701954  530956 cri.go:89] found id: ""
	I1212 00:38:54.701970  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.701979  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:54.701986  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:54.702056  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:54.733636  530956 cri.go:89] found id: ""
	I1212 00:38:54.733657  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.733666  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:54.733671  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:54.733742  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:54.761858  530956 cri.go:89] found id: ""
	I1212 00:38:54.761885  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.761892  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:54.761897  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:54.761965  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:54.798397  530956 cri.go:89] found id: ""
	I1212 00:38:54.798411  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.798431  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:54.798436  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:54.798502  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:54.823810  530956 cri.go:89] found id: ""
	I1212 00:38:54.823824  530956 logs.go:282] 0 containers: []
	W1212 00:38:54.823831  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:54.823840  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:54.823850  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:54.891230  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:54.891249  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:54.907075  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:54.907092  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:54.979081  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:54.970178   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.971009   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.971760   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.973599   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.973900   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:54.970178   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.971009   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.971760   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.973599   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:54.973900   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:54.979091  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:54.979103  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:55.048465  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:55.048486  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:38:57.579400  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:38:57.590372  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:38:57.590435  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:38:57.618082  530956 cri.go:89] found id: ""
	I1212 00:38:57.618096  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.618103  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:38:57.618108  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:38:57.618169  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:38:57.644801  530956 cri.go:89] found id: ""
	I1212 00:38:57.644815  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.644822  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:38:57.644827  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:38:57.644886  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:38:57.670018  530956 cri.go:89] found id: ""
	I1212 00:38:57.670032  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.670045  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:38:57.670050  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:38:57.670111  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:38:57.695026  530956 cri.go:89] found id: ""
	I1212 00:38:57.695040  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.695047  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:38:57.695052  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:38:57.695116  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:38:57.726077  530956 cri.go:89] found id: ""
	I1212 00:38:57.726091  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.726098  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:38:57.726103  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:38:57.726182  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:38:57.766280  530956 cri.go:89] found id: ""
	I1212 00:38:57.766295  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.766302  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:38:57.766308  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:38:57.766366  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:38:57.794888  530956 cri.go:89] found id: ""
	I1212 00:38:57.794902  530956 logs.go:282] 0 containers: []
	W1212 00:38:57.794909  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:38:57.794917  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:38:57.794931  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:38:57.861092  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:38:57.861111  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:38:57.876214  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:38:57.876230  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:38:57.943746  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:38:57.933552   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.934412   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.936560   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.937552   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.938297   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:38:57.933552   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.934412   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.936560   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.937552   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:38:57.938297   14955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:38:57.943757  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:38:57.943767  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:38:58.013702  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:38:58.013722  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:00.543612  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:00.553735  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:00.553795  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:00.580386  530956 cri.go:89] found id: ""
	I1212 00:39:00.580400  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.580407  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:00.580412  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:00.580471  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:00.608511  530956 cri.go:89] found id: ""
	I1212 00:39:00.608525  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.608532  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:00.608537  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:00.608594  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:00.633613  530956 cri.go:89] found id: ""
	I1212 00:39:00.633627  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.633634  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:00.633639  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:00.633696  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:00.658755  530956 cri.go:89] found id: ""
	I1212 00:39:00.658769  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.658776  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:00.658782  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:00.658845  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:00.688160  530956 cri.go:89] found id: ""
	I1212 00:39:00.688174  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.688181  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:00.688187  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:00.688246  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:00.714115  530956 cri.go:89] found id: ""
	I1212 00:39:00.714129  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.714136  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:00.714142  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:00.714203  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:00.743594  530956 cri.go:89] found id: ""
	I1212 00:39:00.743607  530956 logs.go:282] 0 containers: []
	W1212 00:39:00.743614  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:00.743622  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:00.743632  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:00.825728  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:00.825750  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:00.840575  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:00.840590  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:00.904328  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:00.896372   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.896852   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.898503   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.898943   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.900532   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:00.896372   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.896852   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.898503   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.898943   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:00.900532   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:00.904339  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:00.904350  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:00.971157  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:00.971177  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:03.500568  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:03.510753  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:03.510824  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:03.535333  530956 cri.go:89] found id: ""
	I1212 00:39:03.535347  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.535354  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:03.535359  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:03.535422  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:03.560575  530956 cri.go:89] found id: ""
	I1212 00:39:03.560589  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.560597  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:03.560602  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:03.560659  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:03.589048  530956 cri.go:89] found id: ""
	I1212 00:39:03.589062  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.589069  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:03.589075  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:03.589131  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:03.614812  530956 cri.go:89] found id: ""
	I1212 00:39:03.614826  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.614834  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:03.614839  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:03.614908  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:03.641138  530956 cri.go:89] found id: ""
	I1212 00:39:03.641152  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.641158  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:03.641164  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:03.641221  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:03.669855  530956 cri.go:89] found id: ""
	I1212 00:39:03.669869  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.669876  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:03.669884  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:03.669943  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:03.694625  530956 cri.go:89] found id: ""
	I1212 00:39:03.694650  530956 logs.go:282] 0 containers: []
	W1212 00:39:03.694657  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:03.694665  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:03.694676  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:03.761872  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:03.761891  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:03.777581  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:03.777598  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:03.843774  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:03.835704   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.836242   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.838010   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.838382   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.839850   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:03.835704   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.836242   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.838010   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.838382   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:03.839850   15173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:03.843783  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:03.843793  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:03.914951  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:03.914977  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
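	[Editor's note] Every "describe nodes" attempt above fails at the same point: with no kube-apiserver container running, nothing listens on localhost:8441 (the apiserver port used throughout this log), so kubectl's dial is refused at the TCP level before any Kubernetes handshake. The minimal sketch below reproduces that check with a plain TCP dial; it is illustrative only and not part of the test suite.

    // Minimal sketch: confirm the "connection refused" seen above is an
    // ordinary TCP-level refusal on the apiserver port from this log.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("dial failed:", err) // e.g. connect: connection refused
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }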
	I1212 00:39:06.443917  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:06.454370  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:06.454434  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:06.482109  530956 cri.go:89] found id: ""
	I1212 00:39:06.482123  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.482131  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:06.482136  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:06.482199  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:06.509716  530956 cri.go:89] found id: ""
	I1212 00:39:06.509730  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.509737  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:06.509742  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:06.509800  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:06.537521  530956 cri.go:89] found id: ""
	I1212 00:39:06.537535  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.537542  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:06.537548  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:06.537606  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:06.562757  530956 cri.go:89] found id: ""
	I1212 00:39:06.562770  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.562778  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:06.562783  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:06.562842  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:06.587417  530956 cri.go:89] found id: ""
	I1212 00:39:06.587431  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.587439  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:06.587443  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:06.587507  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:06.612775  530956 cri.go:89] found id: ""
	I1212 00:39:06.612789  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.612797  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:06.612804  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:06.612864  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:06.637360  530956 cri.go:89] found id: ""
	I1212 00:39:06.637374  530956 logs.go:282] 0 containers: []
	W1212 00:39:06.637382  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:06.637389  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:06.637400  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:06.651687  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:06.651703  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:06.714510  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:06.706351   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.706972   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.708728   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.709213   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:06.710650   15271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:06.714521  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:06.714531  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:06.793242  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:06.793263  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:06.825153  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:06.825170  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
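The cycle above repeats roughly every three seconds for the rest of this section: minikube pgreps for a kube-apiserver process, asks the CRI runtime for each control-plane container by name, finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before trying again. A minimal sketch of the per-component container lookup, mirroring the crictl invocations in the log (the helper name and hard-coded component list are illustrative, not minikube's cri.go implementation; it assumes crictl is on PATH and sudo is non-interactive):

    // Illustrative only: shell out to crictl the way the cycles above do.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // lookupContainers mirrors: sudo crictl ps -a --quiet --name=<component>
    func lookupContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
        }
        return strings.Fields(string(out)), nil // --quiet prints one container ID per line
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := lookupContainers(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c) // cf. the W-level lines above
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }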
	I1212 00:39:09.391589  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:09.401762  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:09.401823  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:09.426113  530956 cri.go:89] found id: ""
	I1212 00:39:09.426127  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.426135  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:09.426139  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:09.426197  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:09.455495  530956 cri.go:89] found id: ""
	I1212 00:39:09.455509  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.455522  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:09.455527  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:09.455586  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:09.484947  530956 cri.go:89] found id: ""
	I1212 00:39:09.484961  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.484969  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:09.484975  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:09.485038  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:09.510850  530956 cri.go:89] found id: ""
	I1212 00:39:09.510865  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.510873  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:09.510878  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:09.510936  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:09.536933  530956 cri.go:89] found id: ""
	I1212 00:39:09.536955  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.536963  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:09.536968  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:09.537038  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:09.565308  530956 cri.go:89] found id: ""
	I1212 00:39:09.565321  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.565328  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:09.565333  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:09.565391  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:09.596694  530956 cri.go:89] found id: ""
	I1212 00:39:09.596708  530956 logs.go:282] 0 containers: []
	W1212 00:39:09.596716  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:09.596724  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:09.596734  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:09.661768  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:09.661787  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:09.676496  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:09.676512  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:09.751036  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:09.740810   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.741527   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.743548   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.744409   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:09.746279   15379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:09.751057  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:09.751069  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:09.831885  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:09.831905  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
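Every "describe nodes" attempt in this section fails the same way: the bundled kubectl retries its API discovery request five times (the five memcache.go:265 lines per attempt), and each try dies with "connection refused" because nothing is listening on port 8441 yet. A quick reachability probe for that endpoint, assuming the localhost:8441 address shown in the stderr (a diagnostic sketch, not part of the test harness):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint the kubectl errors above point at.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // e.g. "connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8441")
    }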
	I1212 00:39:12.361885  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:12.371912  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:12.371972  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:12.400852  530956 cri.go:89] found id: ""
	I1212 00:39:12.400867  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.400874  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:12.400879  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:12.400939  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:12.426229  530956 cri.go:89] found id: ""
	I1212 00:39:12.426244  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.426251  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:12.426256  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:12.426313  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:12.455450  530956 cri.go:89] found id: ""
	I1212 00:39:12.455465  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.455472  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:12.455477  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:12.455542  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:12.480339  530956 cri.go:89] found id: ""
	I1212 00:39:12.480353  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.480360  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:12.480365  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:12.480425  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:12.508098  530956 cri.go:89] found id: ""
	I1212 00:39:12.508112  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.508119  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:12.508124  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:12.508185  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:12.534232  530956 cri.go:89] found id: ""
	I1212 00:39:12.534246  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.534253  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:12.534259  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:12.534318  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:12.564030  530956 cri.go:89] found id: ""
	I1212 00:39:12.564045  530956 logs.go:282] 0 containers: []
	W1212 00:39:12.564053  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:12.564061  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:12.564072  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:12.578300  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:12.578315  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:12.645692  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:12.637241   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.637958   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.639484   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.639902   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:12.641511   15484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:12.645702  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:12.645714  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:12.716817  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:12.716835  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:12.755607  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:12.755622  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:15.328461  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:15.338656  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:15.338747  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:15.368754  530956 cri.go:89] found id: ""
	I1212 00:39:15.368768  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.368775  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:15.368780  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:15.368839  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:15.395430  530956 cri.go:89] found id: ""
	I1212 00:39:15.395444  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.395451  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:15.395456  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:15.395522  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:15.420901  530956 cri.go:89] found id: ""
	I1212 00:39:15.420922  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.420930  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:15.420935  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:15.420996  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:15.446341  530956 cri.go:89] found id: ""
	I1212 00:39:15.446355  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.446362  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:15.446367  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:15.446425  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:15.472134  530956 cri.go:89] found id: ""
	I1212 00:39:15.472148  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.472155  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:15.472160  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:15.472224  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:15.499707  530956 cri.go:89] found id: ""
	I1212 00:39:15.499721  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.499729  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:15.499734  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:15.499803  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:15.525097  530956 cri.go:89] found id: ""
	I1212 00:39:15.525111  530956 logs.go:282] 0 containers: []
	W1212 00:39:15.525119  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:15.525126  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:15.525141  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:15.591570  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:15.591589  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:15.606307  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:15.606323  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:15.671615  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:15.663912   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.664737   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.666223   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.666722   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:15.668013   15590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:15.671625  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:15.671640  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:15.740633  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:15.740680  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
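Each pass collects the same four diagnostics, in rotating order: the kubelet and CRI-O units via journalctl, a filtered dmesg tail, and a container listing that tries crictl first and falls back to the docker CLI (the `which crictl || echo crictl` ... || sudo docker ps -a idiom in the log). A sketch that replays those exact command lines locally (the command strings are copied verbatim from the log; running them requires root on the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := []string{
            "sudo journalctl -u kubelet -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "sudo journalctl -u crio -n 400",
            // crictl if installed, otherwise fall back to docker:
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            fmt.Printf("== %s (err=%v)\n%s\n", c, err, out)
        }
    }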
	I1212 00:39:18.284352  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:18.294497  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:18.294570  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:18.320150  530956 cri.go:89] found id: ""
	I1212 00:39:18.320164  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.320173  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:18.320178  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:18.320236  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:18.346472  530956 cri.go:89] found id: ""
	I1212 00:39:18.346486  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.346493  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:18.346498  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:18.346556  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:18.377328  530956 cri.go:89] found id: ""
	I1212 00:39:18.377342  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.377349  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:18.377354  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:18.377411  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:18.402792  530956 cri.go:89] found id: ""
	I1212 00:39:18.402813  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.402820  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:18.402826  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:18.402889  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:18.433183  530956 cri.go:89] found id: ""
	I1212 00:39:18.433198  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.433205  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:18.433210  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:18.433272  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:18.458993  530956 cri.go:89] found id: ""
	I1212 00:39:18.459007  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.459015  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:18.459020  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:18.459082  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:18.483237  530956 cri.go:89] found id: ""
	I1212 00:39:18.483251  530956 logs.go:282] 0 containers: []
	W1212 00:39:18.483258  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:18.483267  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:18.483276  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:18.549785  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:18.549803  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:18.564675  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:18.564692  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:18.635252  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:18.622293   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.627996   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.628725   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.629829   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:18.630310   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:18.635261  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:18.635271  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:18.704032  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:18.704054  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:21.245504  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:21.256336  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:21.256398  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:21.282849  530956 cri.go:89] found id: ""
	I1212 00:39:21.282863  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.282871  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:21.282878  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:21.282936  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:21.309330  530956 cri.go:89] found id: ""
	I1212 00:39:21.309344  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.309351  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:21.309359  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:21.309419  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:21.338973  530956 cri.go:89] found id: ""
	I1212 00:39:21.338986  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.338994  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:21.338999  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:21.339064  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:21.366261  530956 cri.go:89] found id: ""
	I1212 00:39:21.366275  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.366282  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:21.366287  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:21.366346  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:21.393801  530956 cri.go:89] found id: ""
	I1212 00:39:21.393815  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.393822  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:21.393827  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:21.393888  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:21.418339  530956 cri.go:89] found id: ""
	I1212 00:39:21.418353  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.418360  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:21.418365  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:21.418425  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:21.443336  530956 cri.go:89] found id: ""
	I1212 00:39:21.443350  530956 logs.go:282] 0 containers: []
	W1212 00:39:21.443356  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:21.443364  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:21.443375  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:21.470973  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:21.470988  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:21.540182  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:21.540203  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:21.554835  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:21.554851  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:21.618440  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:21.609987   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.610798   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.612355   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.612867   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:21.614445   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:21.618450  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:21.618460  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:24.186363  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:24.196446  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:24.196514  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:24.228176  530956 cri.go:89] found id: ""
	I1212 00:39:24.228189  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.228196  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:24.228201  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:24.228263  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:24.252432  530956 cri.go:89] found id: ""
	I1212 00:39:24.252446  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.252453  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:24.252458  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:24.252517  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:24.277088  530956 cri.go:89] found id: ""
	I1212 00:39:24.277102  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.277109  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:24.277113  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:24.277172  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:24.301976  530956 cri.go:89] found id: ""
	I1212 00:39:24.301989  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.301996  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:24.302001  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:24.302058  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:24.326771  530956 cri.go:89] found id: ""
	I1212 00:39:24.326785  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.326792  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:24.326797  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:24.326858  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:24.352740  530956 cri.go:89] found id: ""
	I1212 00:39:24.352754  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.352761  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:24.352766  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:24.352825  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:24.379469  530956 cri.go:89] found id: ""
	I1212 00:39:24.379483  530956 logs.go:282] 0 containers: []
	W1212 00:39:24.379490  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:24.379498  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:24.379508  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:24.407400  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:24.407417  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:24.473931  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:24.473951  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:24.488478  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:24.488494  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:24.552073  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:24.544053   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.544795   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.546281   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.546877   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:24.548351   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:24.552083  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:24.552093  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:27.124323  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:27.134160  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:27.134218  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:27.161224  530956 cri.go:89] found id: ""
	I1212 00:39:27.161239  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.161247  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:27.161253  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:27.161317  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:27.185561  530956 cri.go:89] found id: ""
	I1212 00:39:27.185575  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.185582  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:27.185587  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:27.185647  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:27.212949  530956 cri.go:89] found id: ""
	I1212 00:39:27.212962  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.212969  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:27.212974  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:27.213035  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:27.237907  530956 cri.go:89] found id: ""
	I1212 00:39:27.237921  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.237928  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:27.237933  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:27.237991  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:27.264773  530956 cri.go:89] found id: ""
	I1212 00:39:27.264787  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.264794  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:27.264799  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:27.264858  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:27.290448  530956 cri.go:89] found id: ""
	I1212 00:39:27.290462  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.290469  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:27.290474  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:27.290531  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:27.315823  530956 cri.go:89] found id: ""
	I1212 00:39:27.315837  530956 logs.go:282] 0 containers: []
	W1212 00:39:27.315844  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:27.315852  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:27.315863  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:27.389757  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:27.389777  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:27.422043  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:27.422059  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:27.492490  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:27.492509  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:27.507777  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:27.507793  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:27.571981  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:27.563800   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.564533   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.566031   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.566607   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:27.568082   16033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
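The timestamps show the cadence plainly: one full cycle at 00:39:06, :09, :12, :15, :18, :21, :24, :27, and so on, i.e. a fixed-interval poll that keeps probing until the apiserver answers or an overall deadline expires. A minimal sketch of that wait pattern (the 3-second interval matches the gaps above; the 2-minute deadline is an assumption, not minikube's actual tuning):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForAPIServer polls addr until it accepts a TCP connection or timeout elapses.
    func waitForAPIServer(addr string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if conn, err := net.DialTimeout("tcp", addr, interval); err == nil {
                conn.Close()
                return nil // apiserver is accepting connections
            }
            time.Sleep(interval) // matches the ~3s gap between cycles above
        }
        return fmt.Errorf("apiserver %s still not reachable after %s", addr, timeout)
    }

    func main() {
        if err := waitForAPIServer("localhost:8441", 3*time.Second, 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }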
	I1212 00:39:30.074632  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:30.089373  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:30.089465  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:30.125906  530956 cri.go:89] found id: ""
	I1212 00:39:30.125923  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.125931  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:30.125939  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:30.126019  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:30.159780  530956 cri.go:89] found id: ""
	I1212 00:39:30.159796  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.159804  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:30.159810  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:30.159878  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:30.186451  530956 cri.go:89] found id: ""
	I1212 00:39:30.186466  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.186473  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:30.186478  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:30.186541  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:30.212831  530956 cri.go:89] found id: ""
	I1212 00:39:30.212846  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.212859  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:30.212864  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:30.212926  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:30.239897  530956 cri.go:89] found id: ""
	I1212 00:39:30.239912  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.239919  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:30.239924  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:30.239987  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:30.265595  530956 cri.go:89] found id: ""
	I1212 00:39:30.265610  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.265618  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:30.265623  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:30.265684  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:30.293057  530956 cri.go:89] found id: ""
	I1212 00:39:30.293072  530956 logs.go:282] 0 containers: []
	W1212 00:39:30.293079  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:30.293087  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:30.293098  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:30.360384  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:30.360403  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:30.375514  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:30.375533  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:30.445622  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:30.436678   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.437400   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.439405   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.440010   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:30.441699   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:39:30.445632  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:30.445642  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:30.514984  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:30.515002  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:33.046486  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:33.057328  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:33.057389  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:33.082571  530956 cri.go:89] found id: ""
	I1212 00:39:33.082584  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.082592  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:33.082597  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:33.082656  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:33.107156  530956 cri.go:89] found id: ""
	I1212 00:39:33.107169  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.107176  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:33.107181  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:33.107242  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:33.132433  530956 cri.go:89] found id: ""
	I1212 00:39:33.132448  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.132456  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:33.132460  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:33.132524  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:33.158141  530956 cri.go:89] found id: ""
	I1212 00:39:33.158155  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.158162  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:33.158167  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:33.158229  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:33.185335  530956 cri.go:89] found id: ""
	I1212 00:39:33.185350  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.185357  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:33.185362  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:33.185423  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:33.214702  530956 cri.go:89] found id: ""
	I1212 00:39:33.214716  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.214731  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:33.214738  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:33.214798  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:33.239415  530956 cri.go:89] found id: ""
	I1212 00:39:33.239429  530956 logs.go:282] 0 containers: []
	W1212 00:39:33.239436  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:33.239444  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:33.239462  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:33.303881  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:33.303900  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:33.318306  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:33.318324  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:33.385940  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:33.376699   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.377337   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379127   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379736   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.381336   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:33.376699   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.377337   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379127   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.379736   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:33.381336   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:33.385950  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:33.385961  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:33.453867  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:33.453884  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:35.983022  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:35.993721  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:35.993785  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:36.032639  530956 cri.go:89] found id: ""
	I1212 00:39:36.032654  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.032662  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:36.032667  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:36.032737  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:36.069795  530956 cri.go:89] found id: ""
	I1212 00:39:36.069810  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.069817  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:36.069822  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:36.069882  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:36.099096  530956 cri.go:89] found id: ""
	I1212 00:39:36.099111  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.099118  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:36.099124  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:36.099184  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:36.128685  530956 cri.go:89] found id: ""
	I1212 00:39:36.128699  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.128706  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:36.128711  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:36.128772  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:36.154641  530956 cri.go:89] found id: ""
	I1212 00:39:36.154654  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.154662  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:36.154666  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:36.154762  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:36.179316  530956 cri.go:89] found id: ""
	I1212 00:39:36.179330  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.179338  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:36.179343  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:36.179402  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:36.205036  530956 cri.go:89] found id: ""
	I1212 00:39:36.205050  530956 logs.go:282] 0 containers: []
	W1212 00:39:36.205057  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:36.205066  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:36.205079  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:36.271067  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:36.271086  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:36.285990  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:36.286006  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:36.350986  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:36.343284   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.343819   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345314   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345743   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.347205   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:36.343284   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.343819   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345314   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.345743   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:36.347205   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:36.350996  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:36.351005  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:36.418783  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:36.418803  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:38.948706  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:38.958630  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:38.958705  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:38.988268  530956 cri.go:89] found id: ""
	I1212 00:39:38.988282  530956 logs.go:282] 0 containers: []
	W1212 00:39:38.988289  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:38.988294  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:38.988372  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:39.017066  530956 cri.go:89] found id: ""
	I1212 00:39:39.017088  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.017095  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:39.017100  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:39.017158  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:39.044203  530956 cri.go:89] found id: ""
	I1212 00:39:39.044217  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.044223  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:39.044232  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:39.044293  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:39.073574  530956 cri.go:89] found id: ""
	I1212 00:39:39.073588  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.073595  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:39.073600  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:39.073658  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:39.098254  530956 cri.go:89] found id: ""
	I1212 00:39:39.098267  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.098274  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:39.098279  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:39.098338  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:39.122552  530956 cri.go:89] found id: ""
	I1212 00:39:39.122566  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.122573  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:39.122578  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:39.122641  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:39.149933  530956 cri.go:89] found id: ""
	I1212 00:39:39.149947  530956 logs.go:282] 0 containers: []
	W1212 00:39:39.149954  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:39.149961  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:39.149972  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:39.164970  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:39.164986  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:39.228249  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:39.219299   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.219833   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.221740   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.222278   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.223944   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:39.219299   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.219833   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.221740   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.222278   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:39.223944   16431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:39.228259  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:39.228272  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:39.295712  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:39.295731  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:39.326861  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:39.326879  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:41.894749  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:41.904730  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:41.904791  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:41.929481  530956 cri.go:89] found id: ""
	I1212 00:39:41.929494  530956 logs.go:282] 0 containers: []
	W1212 00:39:41.929501  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:41.929506  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:41.929564  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:41.956371  530956 cri.go:89] found id: ""
	I1212 00:39:41.956385  530956 logs.go:282] 0 containers: []
	W1212 00:39:41.956392  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:41.956397  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:41.956453  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:41.998298  530956 cri.go:89] found id: ""
	I1212 00:39:41.998313  530956 logs.go:282] 0 containers: []
	W1212 00:39:41.998327  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:41.998332  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:41.998394  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:42.039528  530956 cri.go:89] found id: ""
	I1212 00:39:42.039542  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.039549  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:42.039554  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:42.039617  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:42.071895  530956 cri.go:89] found id: ""
	I1212 00:39:42.071909  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.071918  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:42.071923  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:42.071999  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:42.104807  530956 cri.go:89] found id: ""
	I1212 00:39:42.104823  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.104831  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:42.104837  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:42.104914  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:42.139871  530956 cri.go:89] found id: ""
	I1212 00:39:42.139886  530956 logs.go:282] 0 containers: []
	W1212 00:39:42.139894  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:42.139903  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:42.139917  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:42.221872  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:42.211579   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.212551   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.214428   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.215573   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.216630   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:42.211579   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.212551   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.214428   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.215573   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:42.216630   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:42.221883  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:42.221894  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:42.294247  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:42.294267  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:42.327229  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:42.327245  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:42.396289  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:42.396308  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
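
(Editor's note: every cycle re-collects the same five sources — kubelet journal, dmesg, `kubectl describe nodes`, CRI-O journal, and container status. When no control-plane container exists at all, as here, the kubelet and CRI-O journals are usually the informative ones. A hedged sketch of filtering them by hand, using the same `journalctl` invocations the log runs; the grep patterns are illustrative, not minikube's:

	# The kubelet journal usually explains why no control-plane container was started:
	sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20
	# CRI-O's journal shows whether pod-sandbox or container creation was even attempted:
	sudo journalctl -u crio -n 400 --no-pager | tail -n 20

The log below continues verbatim.)
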
	I1212 00:39:44.911333  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:44.921559  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:44.921618  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:44.947811  530956 cri.go:89] found id: ""
	I1212 00:39:44.947825  530956 logs.go:282] 0 containers: []
	W1212 00:39:44.947832  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:44.947837  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:44.947898  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:44.974488  530956 cri.go:89] found id: ""
	I1212 00:39:44.974502  530956 logs.go:282] 0 containers: []
	W1212 00:39:44.974509  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:44.974514  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:44.974578  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:45.062335  530956 cri.go:89] found id: ""
	I1212 00:39:45.062350  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.062358  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:45.062363  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:45.062431  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:45.115594  530956 cri.go:89] found id: ""
	I1212 00:39:45.115611  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.115621  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:45.115627  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:45.115695  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:45.157432  530956 cri.go:89] found id: ""
	I1212 00:39:45.157449  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.157457  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:45.157463  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:45.157542  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:45.199222  530956 cri.go:89] found id: ""
	I1212 00:39:45.199237  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.199247  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:45.199252  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:45.199327  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:45.277211  530956 cri.go:89] found id: ""
	I1212 00:39:45.277239  530956 logs.go:282] 0 containers: []
	W1212 00:39:45.277248  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:45.277256  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:45.277272  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:45.354665  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:45.354742  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:45.370015  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:45.370032  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:45.437294  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:45.428349   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.429025   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.430763   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.431362   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.433211   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:45.428349   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.429025   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.430763   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.431362   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:45.433211   16647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:45.437306  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:45.437317  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:45.506731  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:45.506752  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:48.035477  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:48.045681  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:48.045741  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:48.076045  530956 cri.go:89] found id: ""
	I1212 00:39:48.076059  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.076066  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:48.076072  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:48.076135  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:48.110061  530956 cri.go:89] found id: ""
	I1212 00:39:48.110074  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.110082  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:48.110087  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:48.110146  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:48.134924  530956 cri.go:89] found id: ""
	I1212 00:39:48.134939  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.134946  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:48.134951  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:48.135014  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:48.160105  530956 cri.go:89] found id: ""
	I1212 00:39:48.160119  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.160126  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:48.160131  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:48.160199  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:48.185148  530956 cri.go:89] found id: ""
	I1212 00:39:48.185162  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.185169  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:48.185174  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:48.185236  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:48.210105  530956 cri.go:89] found id: ""
	I1212 00:39:48.210119  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.210127  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:48.210132  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:48.210198  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:48.234723  530956 cri.go:89] found id: ""
	I1212 00:39:48.234736  530956 logs.go:282] 0 containers: []
	W1212 00:39:48.234743  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:48.234752  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:48.234762  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:48.264606  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:48.264624  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:48.333093  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:48.333111  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:48.348065  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:48.348080  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:48.410868  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:48.402563   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.403327   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405002   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405468   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.407068   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:48.402563   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.403327   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405002   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.405468   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:48.407068   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:48.410879  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:48.410891  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:50.982598  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:50.995299  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:50.995361  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:51.032326  530956 cri.go:89] found id: ""
	I1212 00:39:51.032340  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.032348  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:51.032353  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:51.032412  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:51.060416  530956 cri.go:89] found id: ""
	I1212 00:39:51.060435  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.060444  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:51.060448  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:51.060525  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:51.087755  530956 cri.go:89] found id: ""
	I1212 00:39:51.087769  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.087777  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:51.087783  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:51.087844  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:51.113932  530956 cri.go:89] found id: ""
	I1212 00:39:51.113946  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.113954  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:51.113959  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:51.114017  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:51.141585  530956 cri.go:89] found id: ""
	I1212 00:39:51.141599  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.141607  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:51.141612  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:51.141678  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:51.169491  530956 cri.go:89] found id: ""
	I1212 00:39:51.169506  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.169513  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:51.169518  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:51.169577  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:51.195655  530956 cri.go:89] found id: ""
	I1212 00:39:51.195668  530956 logs.go:282] 0 containers: []
	W1212 00:39:51.195676  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:51.195684  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:51.195694  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:51.264764  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:51.264785  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:51.291612  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:51.291628  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:51.359746  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:51.359764  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:51.374319  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:51.374340  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:51.437078  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:51.428471   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.429034   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.430488   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.431070   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.432638   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:51.428471   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.429034   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.430488   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.431070   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:51.432638   16867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
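
(Editor's note: each `describe nodes` attempt fails identically — the kubectl discovery client retries the `/api` group-list fetch five times (the five `memcache.go:265` errors), each refused at the TCP layer on `[::1]:8441`, the non-default apiserver port this profile's kubeconfig points at. A short sketch for separating "nothing is listening" from "apiserver up but unhealthy", assuming node access; command names are standard util-linux/coreutils, not taken from this log:

	# Which endpoint does the node-local kubeconfig point at?
	sudo grep 'server:' /var/lib/minikube/kubeconfig
	# Is anything listening on that port? "connection refused" with no
	# listener means the apiserver container never came up at all.
	sudo ss -tlnp | grep ':8441' || echo 'nothing listening on 8441'

The log below continues verbatim.)
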
	I1212 00:39:53.938110  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:53.948663  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:53.948763  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:53.987477  530956 cri.go:89] found id: ""
	I1212 00:39:53.987490  530956 logs.go:282] 0 containers: []
	W1212 00:39:53.987497  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:53.987502  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:53.987565  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:54.026859  530956 cri.go:89] found id: ""
	I1212 00:39:54.026873  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.026881  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:54.026897  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:54.026958  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:54.054638  530956 cri.go:89] found id: ""
	I1212 00:39:54.054652  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.054659  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:54.054664  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:54.054820  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:54.080864  530956 cri.go:89] found id: ""
	I1212 00:39:54.080879  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.080886  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:54.080891  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:54.080958  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:54.106972  530956 cri.go:89] found id: ""
	I1212 00:39:54.106986  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.106993  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:54.106998  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:54.107056  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:54.131665  530956 cri.go:89] found id: ""
	I1212 00:39:54.131678  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.131686  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:54.131692  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:54.131749  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:54.155857  530956 cri.go:89] found id: ""
	I1212 00:39:54.155870  530956 logs.go:282] 0 containers: []
	W1212 00:39:54.155877  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:54.155885  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:54.155895  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:54.225662  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:54.216735   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.217433   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219268   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219827   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.221703   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:54.216735   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.217433   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219268   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.219827   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:54.221703   16954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:39:54.225675  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:54.225686  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:54.297964  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:54.297992  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:54.330016  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:54.330041  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:54.401820  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:54.401842  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:56.918391  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:56.929720  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:56.929780  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:56.955459  530956 cri.go:89] found id: ""
	I1212 00:39:56.955473  530956 logs.go:282] 0 containers: []
	W1212 00:39:56.955480  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:56.955485  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:56.955543  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:56.987918  530956 cri.go:89] found id: ""
	I1212 00:39:56.987932  530956 logs.go:282] 0 containers: []
	W1212 00:39:56.987939  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:56.987944  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:56.988002  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:57.020006  530956 cri.go:89] found id: ""
	I1212 00:39:57.020020  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.020033  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:57.020038  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:57.020115  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:57.048442  530956 cri.go:89] found id: ""
	I1212 00:39:57.048467  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.048475  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:57.048483  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:57.048552  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:39:57.074435  530956 cri.go:89] found id: ""
	I1212 00:39:57.074449  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.074456  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:39:57.074461  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:39:57.074521  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:39:57.099293  530956 cri.go:89] found id: ""
	I1212 00:39:57.099307  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.099315  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:39:57.099320  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:39:57.099379  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:39:57.125629  530956 cri.go:89] found id: ""
	I1212 00:39:57.125651  530956 logs.go:282] 0 containers: []
	W1212 00:39:57.125659  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:39:57.125666  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:39:57.125676  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:39:57.155351  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:39:57.155367  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:39:57.220025  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:39:57.220044  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:39:57.234981  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:39:57.235003  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:39:57.300835  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:39:57.292962   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.293535   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295085   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295551   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.297012   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:39:57.292962   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.293535   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295085   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.295551   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:39:57.297012   17075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
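
	[annotation] The five "connection refused" errors above (repeated verbatim in the ** stderr ** block, which minikube prints twice by design) all reduce to one fact: nothing is listening on the apiserver port 8441 inside the node. A quick manual check from the host might look like the following sketch, where <profile> is a hypothetical profile name standing in for the cluster under test:

	    minikube -p <profile> ssh -- sudo crictl ps -a --name kube-apiserver   # is an apiserver container present at all?
	    minikube -p <profile> ssh -- sudo ss -ltnp | grep 8441                 # is anything bound to the apiserver port? (assumes ss is available in the node image)

	Here both checks would come back empty, consistent with the "found id: \"\"" results in the log.
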
	I1212 00:39:57.300845  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:39:57.300856  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:39:59.869530  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:39:59.882048  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:39:59.882110  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:39:59.907681  530956 cri.go:89] found id: ""
	I1212 00:39:59.907696  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.907703  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:39:59.907708  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:39:59.907775  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:39:59.932480  530956 cri.go:89] found id: ""
	I1212 00:39:59.932494  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.932509  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:39:59.932515  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:39:59.932583  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:39:59.958173  530956 cri.go:89] found id: ""
	I1212 00:39:59.958188  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.958195  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:39:59.958200  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:39:59.958261  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:39:59.990305  530956 cri.go:89] found id: ""
	I1212 00:39:59.990319  530956 logs.go:282] 0 containers: []
	W1212 00:39:59.990326  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:39:59.990331  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:39:59.990390  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:40:00.115674  530956 cri.go:89] found id: ""
	I1212 00:40:00.115690  530956 logs.go:282] 0 containers: []
	W1212 00:40:00.115699  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:40:00.115705  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:40:00.115778  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:40:00.211546  530956 cri.go:89] found id: ""
	I1212 00:40:00.211573  530956 logs.go:282] 0 containers: []
	W1212 00:40:00.211583  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:40:00.211589  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:40:00.211670  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:40:00.306176  530956 cri.go:89] found id: ""
	I1212 00:40:00.306192  530956 logs.go:282] 0 containers: []
	W1212 00:40:00.306200  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:40:00.306208  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:40:00.306220  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:40:00.433331  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:40:00.433360  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:40:00.458175  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:40:00.458193  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:40:00.603203  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:40:00.592976   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.593818   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596110   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596864   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.598662   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:40:00.592976   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.593818   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596110   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.596864   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:00.598662   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:40:00.603213  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:40:00.603224  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:40:00.674062  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:40:00.674086  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:40:03.207059  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:40:03.217144  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:40:03.217203  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:40:03.242377  530956 cri.go:89] found id: ""
	I1212 00:40:03.242391  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.242398  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:40:03.242403  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:40:03.242460  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:40:03.268604  530956 cri.go:89] found id: ""
	I1212 00:40:03.268618  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.268625  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:40:03.268630  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:40:03.268691  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:40:03.293354  530956 cri.go:89] found id: ""
	I1212 00:40:03.293367  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.293374  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:40:03.293379  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:40:03.293437  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:40:03.323082  530956 cri.go:89] found id: ""
	I1212 00:40:03.323095  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.323102  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:40:03.323108  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:40:03.323165  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:40:03.348118  530956 cri.go:89] found id: ""
	I1212 00:40:03.348132  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.348138  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:40:03.348144  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:40:03.348203  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:40:03.375333  530956 cri.go:89] found id: ""
	I1212 00:40:03.375346  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.375353  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:40:03.375358  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:40:03.375418  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:40:03.401835  530956 cri.go:89] found id: ""
	I1212 00:40:03.401850  530956 logs.go:282] 0 containers: []
	W1212 00:40:03.401857  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:40:03.401864  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:40:03.401882  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:40:03.467887  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:40:03.459632   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.460370   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.461940   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.462285   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.463794   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:40:03.459632   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.460370   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.461940   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.462285   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:40:03.463794   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:40:03.467897  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:40:03.467907  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:40:03.536174  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:40:03.536194  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:40:03.564970  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:40:03.564985  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:40:03.632350  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:40:03.632369  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
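
	[annotation] The ~3 s cadence of the pgrep probes above (00:39:56, 00:39:59, 00:40:03, ...) suggests a simple poll-until-timeout loop around the apiserver process check, with the log-gathering pass re-run after each failed probe. An illustrative sketch only, not minikube's actual implementation:

	    # approximates the polling visible in the timestamps; the real retry/timeout logic lives in minikube's Go code
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        sleep 3
	        # each failed poll triggers another "Gathering logs for ..." cycle like the one above
	    done
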
	I1212 00:40:06.147945  530956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:40:06.157971  530956 kubeadm.go:602] duration metric: took 4m2.720434125s to restartPrimaryControlPlane
	W1212 00:40:06.158027  530956 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 00:40:06.158103  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 00:40:06.569482  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:40:06.582591  530956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:40:06.590536  530956 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:40:06.590592  530956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:40:06.598618  530956 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:40:06.598629  530956 kubeadm.go:158] found existing configuration files:
	
	I1212 00:40:06.598698  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:40:06.606769  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:40:06.606840  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:40:06.614547  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:40:06.622660  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:40:06.622739  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:40:06.630003  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:40:06.638125  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:40:06.638179  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:40:06.645410  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:40:06.652882  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:40:06.652943  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
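
	[annotation] The four grep/rm pairs above are minikube's stale-kubeconfig sweep: each /etc/kubernetes/*.conf is kept only if it already points at the expected control-plane endpoint, and is otherwise removed (removal of a missing file is harmless). A minimal sketch of the same logic, using the endpoint shown in the log:

	    endpoint="https://control-plane.minikube.internal:8441"   # taken from the grep commands above
	    for f in admin kubelet controller-manager scheduler; do
	        conf="/etc/kubernetes/${f}.conf"
	        sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"   # drop configs that do not match (or do not exist)
	    done

	In this run every grep exits with status 2 because the files were already deleted by the preceding "kubeadm reset", so the sweep is effectively a no-op before the fresh "kubeadm init".
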
	I1212 00:40:06.660446  530956 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:40:06.700514  530956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:40:06.700561  530956 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:40:06.776561  530956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:40:06.776625  530956 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:40:06.776659  530956 kubeadm.go:319] OS: Linux
	I1212 00:40:06.776702  530956 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:40:06.776749  530956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:40:06.776795  530956 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:40:06.776842  530956 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:40:06.776889  530956 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:40:06.776936  530956 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:40:06.776980  530956 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:40:06.777026  530956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:40:06.777077  530956 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:40:06.848361  530956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:40:06.848476  530956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:40:06.848571  530956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:40:06.858454  530956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:40:06.861922  530956 out.go:252]   - Generating certificates and keys ...
	I1212 00:40:06.862039  530956 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:40:06.862113  530956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:40:06.862184  530956 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:40:06.862240  530956 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:40:06.862305  530956 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:40:06.862362  530956 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:40:06.862420  530956 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:40:06.862477  530956 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:40:06.862546  530956 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:40:06.862613  530956 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:40:06.862665  530956 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:40:06.862736  530956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:40:07.126544  530956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:40:07.166854  530956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:40:07.523509  530956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:40:07.692785  530956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:40:07.825726  530956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:40:07.826395  530956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:40:07.830778  530956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:40:07.833963  530956 out.go:252]   - Booting up control plane ...
	I1212 00:40:07.834090  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:40:07.834172  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:40:07.835198  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:40:07.850333  530956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:40:07.850580  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:40:07.857863  530956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:40:07.858096  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:40:07.858271  530956 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:40:07.986589  530956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:40:07.986752  530956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:44:07.988367  530956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001882345s
	I1212 00:44:07.988392  530956 kubeadm.go:319] 
	I1212 00:44:07.988471  530956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:44:07.988504  530956 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:44:07.988626  530956 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:44:07.988630  530956 kubeadm.go:319] 
	I1212 00:44:07.988743  530956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:44:07.988774  530956 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:44:07.988810  530956 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:44:07.988814  530956 kubeadm.go:319] 
	I1212 00:44:07.993727  530956 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:44:07.994213  530956 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:44:07.994355  530956 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:44:07.994630  530956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 00:44:07.994638  530956 kubeadm.go:319] 
	I1212 00:44:07.994738  530956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 00:44:07.994866  530956 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001882345s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
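
	[annotation] The failure mode here is the one kubeadm itself describes: the kubelet never answered its healthz probe within 4m0s, so the static control-plane pods were never confirmed. The first-line checks are exactly the commands the error text suggests, run on the node; the cgroup check is an extra step relevant to the cgroups v1 deprecation warning above:

	    sudo systemctl status kubelet --no-pager      # is the unit active, or crash-looping?
	    sudo journalctl -xeu kubelet | tail -n 50     # most recent kubelet errors
	    curl -sS http://127.0.0.1:10248/healthz       # the exact probe kubeadm was waiting on
	    stat -fc %T /sys/fs/cgroup                    # cgroup2fs => cgroups v2; tmpfs => v1, which kubelet v1.35+ rejects unless FailCgroupV1=false

	Given the warning that this 5.15.0-1084-aws host is still on cgroups v1, a kubelet built at v1.35.0-beta.0 refusing to become healthy is the plausible root cause; confirming it would require the journalctl output, which this log does not capture at that point.
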
	
	I1212 00:44:07.994955  530956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 00:44:08.418732  530956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:44:08.431583  530956 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:44:08.431639  530956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:44:08.439724  530956 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:44:08.439733  530956 kubeadm.go:158] found existing configuration files:
	
	I1212 00:44:08.439785  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:44:08.447652  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:44:08.447708  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:44:08.454853  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:44:08.462499  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:44:08.462562  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:44:08.470106  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:44:08.477811  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:44:08.477868  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:44:08.485348  530956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:44:08.493142  530956 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:44:08.493207  530956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:44:08.501010  530956 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:44:08.619087  530956 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:44:08.619550  530956 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:44:08.685435  530956 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:48:10.247562  530956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 00:48:10.247592  530956 kubeadm.go:319] 
	I1212 00:48:10.247688  530956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 00:48:10.252292  530956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:48:10.252346  530956 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:48:10.252445  530956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:48:10.252500  530956 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:48:10.252533  530956 kubeadm.go:319] OS: Linux
	I1212 00:48:10.252577  530956 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:48:10.252624  530956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:48:10.252670  530956 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:48:10.252716  530956 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:48:10.252768  530956 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:48:10.252816  530956 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:48:10.252859  530956 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:48:10.252906  530956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:48:10.252951  530956 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:48:10.253023  530956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:48:10.253117  530956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:48:10.253205  530956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:48:10.253277  530956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:48:10.256411  530956 out.go:252]   - Generating certificates and keys ...
	I1212 00:48:10.256515  530956 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:48:10.256580  530956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:48:10.256656  530956 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:48:10.256724  530956 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:48:10.256818  530956 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:48:10.256878  530956 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:48:10.256941  530956 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:48:10.257008  530956 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:48:10.257086  530956 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:48:10.257157  530956 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:48:10.257195  530956 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:48:10.257249  530956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:48:10.257299  530956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:48:10.257355  530956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:48:10.257407  530956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:48:10.257469  530956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:48:10.257524  530956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:48:10.257609  530956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:48:10.257674  530956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:48:10.260574  530956 out.go:252]   - Booting up control plane ...
	I1212 00:48:10.260690  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:48:10.260801  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:48:10.260876  530956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:48:10.260981  530956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:48:10.261102  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:48:10.261235  530956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:48:10.261332  530956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:48:10.261377  530956 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:48:10.261506  530956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:48:10.261614  530956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:48:10.261707  530956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000091689s
	I1212 00:48:10.261721  530956 kubeadm.go:319] 
	I1212 00:48:10.261778  530956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:48:10.261809  530956 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:48:10.261921  530956 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:48:10.261925  530956 kubeadm.go:319] 
	I1212 00:48:10.262045  530956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:48:10.262083  530956 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:48:10.262112  530956 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:48:10.262133  530956 kubeadm.go:319] 
	I1212 00:48:10.262182  530956 kubeadm.go:403] duration metric: took 12m6.858628348s to StartCluster
	I1212 00:48:10.262232  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:48:10.262300  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:48:10.289138  530956 cri.go:89] found id: ""
	I1212 00:48:10.289156  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.289163  530956 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:48:10.289168  530956 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:48:10.289230  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:48:10.317667  530956 cri.go:89] found id: ""
	I1212 00:48:10.317681  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.317689  530956 logs.go:284] No container was found matching "etcd"
	I1212 00:48:10.317694  530956 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:48:10.317758  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:48:10.347070  530956 cri.go:89] found id: ""
	I1212 00:48:10.347083  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.347091  530956 logs.go:284] No container was found matching "coredns"
	I1212 00:48:10.347096  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:48:10.347155  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:48:10.373637  530956 cri.go:89] found id: ""
	I1212 00:48:10.373650  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.373658  530956 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:48:10.373663  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:48:10.373722  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:48:10.401060  530956 cri.go:89] found id: ""
	I1212 00:48:10.401074  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.401081  530956 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:48:10.401086  530956 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:48:10.401146  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:48:10.426271  530956 cri.go:89] found id: ""
	I1212 00:48:10.426296  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.426303  530956 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:48:10.426309  530956 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:48:10.426375  530956 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:48:10.451340  530956 cri.go:89] found id: ""
	I1212 00:48:10.451354  530956 logs.go:282] 0 containers: []
	W1212 00:48:10.451361  530956 logs.go:284] No container was found matching "kindnet"
	I1212 00:48:10.451369  530956 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:48:10.451379  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:48:10.526222  530956 logs.go:123] Gathering logs for container status ...
	I1212 00:48:10.526241  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:48:10.557574  530956 logs.go:123] Gathering logs for kubelet ...
	I1212 00:48:10.557591  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:48:10.627641  530956 logs.go:123] Gathering logs for dmesg ...
	I1212 00:48:10.627659  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:48:10.642797  530956 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:48:10.642812  530956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:48:10.704719  530956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:48:10.695841   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.696445   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698250   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698875   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.700463   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:48:10.695841   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.696445   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698250   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.698875   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:48:10.700463   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 00:48:10.704732  530956 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000091689s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 00:48:10.704782  530956 out.go:285] * 
	W1212 00:48:10.704838  530956 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output above; duplicate omitted]
	W1212 00:48:10.704854  530956 out.go:285] * 
	W1212 00:48:10.706992  530956 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:48:10.711878  530956 out.go:203] 
	W1212 00:48:10.714724  530956 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output above; duplicate omitted]
	W1212 00:48:10.714773  530956 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 00:48:10.714793  530956 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 00:48:10.717973  530956 out.go:203] 
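The kubelet journal captured below shows why kubeadm timed out here: kubelet v1.35 exits with "kubelet is configured to not run on a host using cgroup v1", so the control-plane static pods never start. A minimal sketch of the opt-out the preflight warning names, assuming shell access to the node via `minikube ssh` (the profile name and the /var/lib/kubelet/config.yaml path come from the log above; kubeadm rewrites that file on the next start, so this verifies the diagnosis rather than repairing the run):

    # Append the FailCgroupV1 opt-out (YAML field name per the upstream
    # KubeletConfiguration; assumes the field is not already present), then
    # restart kubelet and re-poll the health endpoint kubeadm was waiting on.
    minikube -p functional-035643 ssh -- \
      "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml"
    minikube -p functional-035643 ssh -- "sudo systemctl restart kubelet"
    minikube -p functional-035643 ssh -- "curl -sS http://127.0.0.1:10248/healthz"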
	
	
	==> CRI-O <==
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167671353Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167708004Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167752311Z" level=info msg="Create NRI interface"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167872177Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167882466Z" level=info msg="runtime interface created"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167904357Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167910363Z" level=info msg="runtime interface starting up..."
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167919183Z" level=info msg="starting plugins..."
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.167936889Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:36:02 functional-035643 crio[9906]: time="2025-12-12T00:36:02.168004375Z" level=info msg="No systemd watchdog enabled"
	Dec 12 00:36:02 functional-035643 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.854632207Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=640da022-2edf-494c-a660-79e3ab919eba name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.855342483Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=673ecd0d-a1ac-45d5-bb90-3e1f04cdc90f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.855810714Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c57f39b7-fb58-4f67-bde4-1b55c2187b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.856291532Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a97cf7ab-fcf0-4971-8a79-d2c53b6e4ee5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.856721905Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=01bb9bbf-51cf-478f-81f3-99ec7edffcf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.857120764Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=272d9706-6818-4f2e-bd33-95134bf8fb23 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:40:06 functional-035643 crio[9906]: time="2025-12-12T00:40:06.857524931Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=50b82acc-740c-444d-8ec5-a3c84ad4b6d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.688638675Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=4b143437-6a5c-4f02-b714-2d1bb8cb5a7a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689301235Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=54834de4-a19b-47fe-b478-123a3a9a03c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689852011Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a783bb1e-a84b-4d8e-b3d6-349f1b7407cf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.690318153Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d1830fa6-0c29-40cb-a67f-5512d68b4fbf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691052318Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=28af2ec8-e6ac-48d5-8255-6af4687f21e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691575Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=e0b0d4f6-b48c-4765-bc0e-dcfd4d36d892 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.69204275Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5eedd5b1-6dbb-4fbd-8ae6-426f470f128b name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:50:21.278752   22679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:21.279554   22679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:21.282267   22679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:21.282992   22679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:21.284631   22679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
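The empty container-status table above and the refused connection to localhost:8441 agree: with kubelet down, the apiserver static pod was never created. A quick cross-check, assuming the same `minikube ssh` access (crictl talks to the CRI-O runtime whose log appears above):

    # With kubelet dead, no control-plane containers should exist, matching
    # the empty CONTAINER table in the "container status" section above.
    minikube -p functional-035643 ssh -- "sudo crictl ps -a"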
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 00:17] overlayfs: idmapped layers are currently not supported
	[Dec12 00:18] overlayfs: idmapped layers are currently not supported
	[Dec12 00:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:50:21 up  3:32,  0 user,  load average: 1.03, 0.42, 0.50
	Linux functional-035643 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:50:19 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:50:19 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1133.
	Dec 12 00:50:19 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:19 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:19 functional-035643 kubelet[22573]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:19 functional-035643 kubelet[22573]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:19 functional-035643 kubelet[22573]: E1212 00:50:19.780534   22573 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:50:19 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:50:19 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:50:20 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1134.
	Dec 12 00:50:20 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:20 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:20 functional-035643 kubelet[22594]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:20 functional-035643 kubelet[22594]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:20 functional-035643 kubelet[22594]: E1212 00:50:20.544540   22594 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:50:20 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:50:20 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:50:21 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1135.
	Dec 12 00:50:21 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:21 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:21 functional-035643 kubelet[22683]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:21 functional-035643 kubelet[22683]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:21 functional-035643 kubelet[22683]: E1212 00:50:21.278037   22683 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:50:21 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:50:21 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
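The restart counter above climbs once per second (1133 to 1135 between 00:50:19 and 00:50:21), so systemd is looping on the same validation failure. The host's cgroup mode can be confirmed directly, assuming the same `minikube ssh` access:

    # "tmpfs" here indicates a cgroup v1 (legacy/hybrid) hierarchy; a
    # cgroup v2 host reports "cgroup2fs" instead.
    minikube -p functional-035643 ssh -- "stat -fc %T /sys/fs/cgroup"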
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 2 (368.559549ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.44s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.64s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
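The four-minute wait that follows is a label-selector poll against the apiserver; every attempt fails the same way because nothing listens on 192.168.49.2:8441. The equivalent by hand (the URL is taken verbatim from the WARNING lines below; the kubectl form assumes a functional-035643 context exists in the local kubeconfig):

    # Raw form of the poll the helper performs; with the apiserver down this
    # fails with the same "connection refused".
    curl -sk "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner"
    kubectl --context functional-035643 -n kube-system get pods \
      -l integration-test=storage-provisioner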
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING repeated 9 more times]
I1212 00:48:28.796968  490954 retry.go:31] will retry after 3.859787152s: Temporary Error: Get "http://10.99.222.93": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[same WARNING repeated 5 times]
E1212 00:48:33.591373  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[same WARNING repeated 9 times]
I1212 00:48:42.657775  490954 retry.go:31] will retry after 5.84337675s: Temporary Error: Get "http://10.99.222.93": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[same WARNING repeated 16 times]
I1212 00:48:58.503162  490954 retry.go:31] will retry after 5.201601646s: Temporary Error: Get "http://10.99.222.93": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[same WARNING repeated 15 times]
I1212 00:49:13.705704  490954 retry.go:31] will retry after 11.587127638s: Temporary Error: Get "http://10.99.222.93": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[same WARNING repeated 22 times]
I1212 00:49:35.293690  490954 retry.go:31] will retry after 8.864841329s: Temporary Error: Get "http://10.99.222.93": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 17 more times]
I1212 00:49:54.159284  490954 retry.go:31] will retry after 15.310385006s: Temporary Error: Get "http://10.99.222.93": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 23 more times]
E1212 00:50:17.604600  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 78 more times]
E1212 00:51:36.663552  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	[the identical warning repeats 18 times in total while the test polls for the pod]
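Each warning above is one polling attempt against the profile's apiserver endpoint. The same failure can be reproduced with a direct probe of that endpoint; a minimal sketch, using the address 192.168.49.2:8441 from this run:

	# /livez is the apiserver liveness endpoint; -k skips TLS verification
	curl -ks https://192.168.49.2:8441/livez
	# with the apiserver down, as here, this fails with a connection-refused error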
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 2 (319.920407ms)
-- stdout --
	Stopped
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
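The wait in functional_test_pvc_test.go is a label-selector pod list; an equivalent manual check, as a sketch against this run's profile context, would be:

	# list the pod the test is waiting for, by its label selector
	kubectl --context functional-035643 get pods -n kube-system -l integration-test=storage-provisioner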
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-035643
helpers_test.go:244: (dbg) docker inspect functional-035643:
-- stdout --
	[
	    {
	        "Id": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	        "Created": "2025-12-12T00:21:16.539894649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:21:16.600605162Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hostname",
	        "HostsPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hosts",
	        "LogPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a-json.log",
	        "Name": "/functional-035643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-035643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	                "LowerDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-035643",
	                "Source": "/var/lib/docker/volumes/functional-035643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035643",
	                "name.minikube.sigs.k8s.io": "functional-035643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ede6a17442d6bf83b8f4c9f93f252345cec3d0406f82de2d6bd2cfd4713e2163",
	            "SandboxKey": "/var/run/docker/netns/ede6a17442d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d5:12:89:ea:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad01995b183fdebead6c725e2b942ae8ce2d3964b3552789fe5b50ee7e7239a3",
	                    "EndpointID": "d429a1cd0f840d042af4ad7ea0bda6067a342be7fb552083411004a3604b0124",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035643",
	                        "02b8c8e636a5"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
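The NetworkSettings block above shows container port 8441/tcp (the apiserver) published on a loopback host port. That mapping can be read back directly from Docker; a sketch against this container:

	# prints the host address for the container's apiserver port; 127.0.0.1:33186 in this run
	docker port functional-035643 8441/tcp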
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643: exit status 2 (301.172055ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-035643 image load --daemon kicbase/echo-server:functional-035643 --alsologtostderr                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image          │ functional-035643 image ls                                                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image          │ functional-035643 image save kicbase/echo-server:functional-035643 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image          │ functional-035643 image rm kicbase/echo-server:functional-035643 --alsologtostderr                                                                        │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image          │ functional-035643 image ls                                                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image          │ functional-035643 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image          │ functional-035643 image ls                                                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image          │ functional-035643 image save --daemon kicbase/echo-server:functional-035643 --alsologtostderr                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh            │ functional-035643 ssh sudo cat /etc/ssl/certs/490954.pem                                                                                                  │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh            │ functional-035643 ssh sudo cat /usr/share/ca-certificates/490954.pem                                                                                      │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh            │ functional-035643 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh            │ functional-035643 ssh sudo cat /etc/ssl/certs/4909542.pem                                                                                                 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh            │ functional-035643 ssh sudo cat /usr/share/ca-certificates/4909542.pem                                                                                     │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh            │ functional-035643 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh            │ functional-035643 ssh sudo cat /etc/test/nested/copy/490954/hosts                                                                                         │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image          │ functional-035643 image ls --format short --alsologtostderr                                                                                               │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image          │ functional-035643 image ls --format yaml --alsologtostderr                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh            │ functional-035643 ssh pgrep buildkitd                                                                                                                     │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ image          │ functional-035643 image build -t localhost/my-image:functional-035643 testdata/build --alsologtostderr                                                    │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image          │ functional-035643 image ls                                                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image          │ functional-035643 image ls --format json --alsologtostderr                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image          │ functional-035643 image ls --format table --alsologtostderr                                                                                               │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ update-context │ functional-035643 update-context --alsologtostderr -v=2                                                                                                   │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ update-context │ functional-035643 update-context --alsologtostderr -v=2                                                                                                   │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ update-context │ functional-035643 update-context --alsologtostderr -v=2                                                                                                   │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:50:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:50:35.555198  548254 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:50:35.555312  548254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:50:35.555323  548254 out.go:374] Setting ErrFile to fd 2...
	I1212 00:50:35.555329  548254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:50:35.555588  548254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:50:35.555945  548254 out.go:368] Setting JSON to false
	I1212 00:50:35.556827  548254 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12781,"bootTime":1765487855,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:50:35.556898  548254 start.go:143] virtualization:  
	I1212 00:50:35.560185  548254 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:50:35.563287  548254 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:50:35.563365  548254 notify.go:221] Checking for updates...
	I1212 00:50:35.569618  548254 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:50:35.572462  548254 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:50:35.575265  548254 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:50:35.577992  548254 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:50:35.580906  548254 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:50:35.584154  548254 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:50:35.584731  548254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:50:35.616876  548254 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:50:35.617061  548254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:50:35.679448  548254 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:50:35.670251588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:50:35.679572  548254 docker.go:319] overlay module found
	I1212 00:50:35.682613  548254 out.go:179] * Using the docker driver based on existing profile
	I1212 00:50:35.685434  548254 start.go:309] selected driver: docker
	I1212 00:50:35.685453  548254 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:50:35.685551  548254 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:50:35.685664  548254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:50:35.739314  548254 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:50:35.730561685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:50:35.739752  548254 cni.go:84] Creating CNI manager for ""
	I1212 00:50:35.739813  548254 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:50:35.739854  548254 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:50:35.742928  548254 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.688638675Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=4b143437-6a5c-4f02-b714-2d1bb8cb5a7a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689301235Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=54834de4-a19b-47fe-b478-123a3a9a03c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689852011Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a783bb1e-a84b-4d8e-b3d6-349f1b7407cf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.690318153Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d1830fa6-0c29-40cb-a67f-5512d68b4fbf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691052318Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=28af2ec8-e6ac-48d5-8255-6af4687f21e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691575Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=e0b0d4f6-b48c-4765-bc0e-dcfd4d36d892 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.69204275Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5eedd5b1-6dbb-4fbd-8ae6-426f470f128b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.158263667Z" level=info msg="Checking image status: kicbase/echo-server:functional-035643" id=88368e0c-235e-4cc6-bed6-b3f33af5da8e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.158468758Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.15852625Z" level=info msg="Image kicbase/echo-server:functional-035643 not found" id=88368e0c-235e-4cc6-bed6-b3f33af5da8e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.158606716Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-035643 found" id=88368e0c-235e-4cc6-bed6-b3f33af5da8e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.184553436Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-035643" id=dd5fa83b-4dd9-4a66-8f82-d4ae5cc0a49c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.184700961Z" level=info msg="Image docker.io/kicbase/echo-server:functional-035643 not found" id=dd5fa83b-4dd9-4a66-8f82-d4ae5cc0a49c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.184740468Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-035643 found" id=dd5fa83b-4dd9-4a66-8f82-d4ae5cc0a49c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.209596371Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-035643" id=04ef7d6a-d863-4275-89f4-870680e8d453 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.209732909Z" level=info msg="Image localhost/kicbase/echo-server:functional-035643 not found" id=04ef7d6a-d863-4275-89f4-870680e8d453 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.209771063Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-035643 found" id=04ef7d6a-d863-4275-89f4-870680e8d453 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.314049882Z" level=info msg="Checking image status: kicbase/echo-server:functional-035643" id=7b44d7f8-9a03-4c8e-b8f0-b3794ebe511a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.314266723Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.314328153Z" level=info msg="Image kicbase/echo-server:functional-035643 not found" id=7b44d7f8-9a03-4c8e-b8f0-b3794ebe511a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.314423987Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-035643 found" id=7b44d7f8-9a03-4c8e-b8f0-b3794ebe511a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.342084879Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-035643" id=6744e391-6e9c-4d1e-b32b-9bdf44020766 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.342289535Z" level=info msg="Image docker.io/kicbase/echo-server:functional-035643 not found" id=6744e391-6e9c-4d1e-b32b-9bdf44020766 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.342348808Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-035643 found" id=6744e391-6e9c-4d1e-b32b-9bdf44020766 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.369193182Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-035643" id=4237a5e8-276b-407b-8794-f7aaa21518e0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:52:20.486524   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:52:20.487342   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:52:20.488891   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:52:20.489348   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:52:20.490970   25333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 00:17] overlayfs: idmapped layers are currently not supported
	[Dec12 00:18] overlayfs: idmapped layers are currently not supported
	[Dec12 00:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:52:20 up  3:34,  0 user,  load average: 0.61, 0.51, 0.53
	Linux functional-035643 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:52:17 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:52:18 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1291.
	Dec 12 00:52:18 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:52:18 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:52:18 functional-035643 kubelet[25209]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:52:18 functional-035643 kubelet[25209]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:52:18 functional-035643 kubelet[25209]: E1212 00:52:18.518184   25209 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:52:18 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:52:18 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:52:19 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1292.
	Dec 12 00:52:19 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:52:19 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:52:19 functional-035643 kubelet[25214]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:52:19 functional-035643 kubelet[25214]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:52:19 functional-035643 kubelet[25214]: E1212 00:52:19.277392   25214 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:52:19 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:52:19 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:52:19 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1293.
	Dec 12 00:52:19 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:52:19 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:52:20 functional-035643 kubelet[25250]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:52:20 functional-035643 kubelet[25250]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:52:20 functional-035643 kubelet[25250]: E1212 00:52:20.038269   25250 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:52:20 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:52:20 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 2 (306.362149ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.64s)
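The kubelet log above carries the root cause: kubelet v1.35.0-beta.0 refuses to start on a host that is still on cgroup v1 ("cgroup v1 support is unsupported and will be removed in a future release"), so the apiserver never comes back up. A quick way to confirm which cgroup version a Linux host runs, as a sketch:

	# prints cgroup2fs on a cgroup v2 (unified) host, tmpfs on cgroup v1
	stat -fc %T /sys/fs/cgroup/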
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.56s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-035643 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-035643 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (63.939215ms)
** stderr ** 
	E1212 00:50:43.871460  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.873071  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.874557  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.876051  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.877541  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-035643 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
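Had the apiserver been reachable, the go-template above would have printed the node's label keys. A simpler equivalent query, as a sketch against the same context:

	# show all labels on every node, including the minikube.k8s.io/* labels asserted below
	kubectl --context functional-035643 get nodes --show-labels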
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	[same five "connection refused" errors as in the first stderr block above]
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	[same five "connection refused" errors as in the first stderr block above]
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1212 00:50:43.871460  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.873071  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.874557  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.876051  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.877541  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1212 00:50:43.871460  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.873071  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.874557  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.876051  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.877541  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1212 00:50:43.871460  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.873071  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.874557  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.876051  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:43.877541  549465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
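The five assertions above are one check repeated per label key: render every label on the first node through a go-template and require each minikube.k8s.io/* key to be present. A minimal Go sketch of that check (not the actual functional_test.go code, but it shells out to kubectl the same way the failing command does):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as the failing command: print every label key.
	tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-035643",
		"get", "nodes", "--output=go-template", "--template="+tmpl).Output()
	if err != nil {
		// With the apiserver refusing connections, this is the path
		// taken above: kubectl exits 1 and the label set comes back empty.
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, want := range []string{
		"minikube.k8s.io/commit",
		"minikube.k8s.io/version",
		"minikube.k8s.io/updated_at",
		"minikube.k8s.io/name",
		"minikube.k8s.io/primary",
	} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing label:", want)
		}
	}
}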
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-035643
helpers_test.go:244: (dbg) docker inspect functional-035643:

-- stdout --
	[
	    {
	        "Id": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	        "Created": "2025-12-12T00:21:16.539894649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:21:16.600605162Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hostname",
	        "HostsPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/hosts",
	        "LogPath": "/var/lib/docker/containers/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a/02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a-json.log",
	        "Name": "/functional-035643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-035643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02b8c8e636a5f9c9930bd279101be257c50eb00805c3c8fd0285e10206ff115a",
	                "LowerDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac4f7ad025123b29aaa4f718217dd9d72b542fc9949b8afc2ffc326afc38542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-035643",
	                "Source": "/var/lib/docker/volumes/functional-035643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035643",
	                "name.minikube.sigs.k8s.io": "functional-035643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ede6a17442d6bf83b8f4c9f93f252345cec3d0406f82de2d6bd2cfd4713e2163",
	            "SandboxKey": "/var/run/docker/netns/ede6a17442d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d5:12:89:ea:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad01995b183fdebead6c725e2b942ae8ce2d3964b3552789fe5b50ee7e7239a3",
	                    "EndpointID": "d429a1cd0f840d042af4ad7ea0bda6067a342be7fb552083411004a3604b0124",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035643",
	                        "02b8c8e636a5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
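The inspect output above shows the networking side is intact: the container is Running, holds 192.168.49.2 on the functional-035643 network, and publishes 8441/tcp to 127.0.0.1:33186. The repeated "connection refused" errors therefore point at a dead apiserver process, not a broken port mapping. A small sketch (addresses taken from this report) that reproduces the refusal from the host:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The cluster IP and the published host port for 8441/tcp. While
	// the apiserver is down, both dials fail with "connection refused"
	// even though the container itself is running.
	for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:33186"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println(addr, "=>", err)
			continue
		}
		conn.Close()
		fmt.Println(addr, "=> open")
	}
}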
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-035643 -n functional-035643: exit status 2 (329.345065ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount1 --alsologtostderr -v=1                      │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ mount     │ -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount3 --alsologtostderr -v=1                      │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ ssh       │ functional-035643 ssh findmnt -T /mount1                                                                                                                  │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh       │ functional-035643 ssh findmnt -T /mount2                                                                                                                  │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh       │ functional-035643 ssh findmnt -T /mount3                                                                                                                  │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ mount     │ -p functional-035643 --kill=true                                                                                                                          │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ start     │ -p functional-035643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ start     │ -p functional-035643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ start     │ -p functional-035643 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                       │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-035643 --alsologtostderr -v=1                                                                                            │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ license   │                                                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ ssh       │ functional-035643 ssh sudo systemctl is-active docker                                                                                                     │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ ssh       │ functional-035643 ssh sudo systemctl is-active containerd                                                                                                 │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │                     │
	│ image     │ functional-035643 image load --daemon kicbase/echo-server:functional-035643 --alsologtostderr                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image     │ functional-035643 image ls                                                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image     │ functional-035643 image load --daemon kicbase/echo-server:functional-035643 --alsologtostderr                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image     │ functional-035643 image ls                                                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image     │ functional-035643 image load --daemon kicbase/echo-server:functional-035643 --alsologtostderr                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image     │ functional-035643 image ls                                                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image     │ functional-035643 image save kicbase/echo-server:functional-035643 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image     │ functional-035643 image rm kicbase/echo-server:functional-035643 --alsologtostderr                                                                        │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image     │ functional-035643 image ls                                                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image     │ functional-035643 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image     │ functional-035643 image ls                                                                                                                                │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	│ image     │ functional-035643 image save --daemon kicbase/echo-server:functional-035643 --alsologtostderr                                                             │ functional-035643 │ jenkins │ v1.37.0 │ 12 Dec 25 00:50 UTC │ 12 Dec 25 00:50 UTC │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:50:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:50:35.555198  548254 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:50:35.555312  548254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:50:35.555323  548254 out.go:374] Setting ErrFile to fd 2...
	I1212 00:50:35.555329  548254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:50:35.555588  548254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:50:35.555945  548254 out.go:368] Setting JSON to false
	I1212 00:50:35.556827  548254 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12781,"bootTime":1765487855,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:50:35.556898  548254 start.go:143] virtualization:  
	I1212 00:50:35.560185  548254 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:50:35.563287  548254 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:50:35.563365  548254 notify.go:221] Checking for updates...
	I1212 00:50:35.569618  548254 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:50:35.572462  548254 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:50:35.575265  548254 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:50:35.577992  548254 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:50:35.580906  548254 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:50:35.584154  548254 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:50:35.584731  548254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:50:35.616876  548254 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:50:35.617061  548254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:50:35.679448  548254 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:50:35.670251588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:50:35.679572  548254 docker.go:319] overlay module found
	I1212 00:50:35.682613  548254 out.go:179] * Using the docker driver based on existing profile
	I1212 00:50:35.685434  548254 start.go:309] selected driver: docker
	I1212 00:50:35.685453  548254 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:50:35.685551  548254 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:50:35.685664  548254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:50:35.739314  548254 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:50:35.730561685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:50:35.739752  548254 cni.go:84] Creating CNI manager for ""
	I1212 00:50:35.739813  548254 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:50:35.739854  548254 start.go:353] cluster config:
	{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:50:35.742928  548254 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.688638675Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=4b143437-6a5c-4f02-b714-2d1bb8cb5a7a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689301235Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=54834de4-a19b-47fe-b478-123a3a9a03c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.689852011Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a783bb1e-a84b-4d8e-b3d6-349f1b7407cf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.690318153Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d1830fa6-0c29-40cb-a67f-5512d68b4fbf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691052318Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=28af2ec8-e6ac-48d5-8255-6af4687f21e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.691575Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=e0b0d4f6-b48c-4765-bc0e-dcfd4d36d892 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:44:08 functional-035643 crio[9906]: time="2025-12-12T00:44:08.69204275Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5eedd5b1-6dbb-4fbd-8ae6-426f470f128b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.158263667Z" level=info msg="Checking image status: kicbase/echo-server:functional-035643" id=88368e0c-235e-4cc6-bed6-b3f33af5da8e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.158468758Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.15852625Z" level=info msg="Image kicbase/echo-server:functional-035643 not found" id=88368e0c-235e-4cc6-bed6-b3f33af5da8e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.158606716Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-035643 found" id=88368e0c-235e-4cc6-bed6-b3f33af5da8e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.184553436Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-035643" id=dd5fa83b-4dd9-4a66-8f82-d4ae5cc0a49c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.184700961Z" level=info msg="Image docker.io/kicbase/echo-server:functional-035643 not found" id=dd5fa83b-4dd9-4a66-8f82-d4ae5cc0a49c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.184740468Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-035643 found" id=dd5fa83b-4dd9-4a66-8f82-d4ae5cc0a49c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.209596371Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-035643" id=04ef7d6a-d863-4275-89f4-870680e8d453 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.209732909Z" level=info msg="Image localhost/kicbase/echo-server:functional-035643 not found" id=04ef7d6a-d863-4275-89f4-870680e8d453 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:39 functional-035643 crio[9906]: time="2025-12-12T00:50:39.209771063Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-035643 found" id=04ef7d6a-d863-4275-89f4-870680e8d453 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.314049882Z" level=info msg="Checking image status: kicbase/echo-server:functional-035643" id=7b44d7f8-9a03-4c8e-b8f0-b3794ebe511a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.314266723Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.314328153Z" level=info msg="Image kicbase/echo-server:functional-035643 not found" id=7b44d7f8-9a03-4c8e-b8f0-b3794ebe511a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.314423987Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-035643 found" id=7b44d7f8-9a03-4c8e-b8f0-b3794ebe511a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.342084879Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-035643" id=6744e391-6e9c-4d1e-b32b-9bdf44020766 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.342289535Z" level=info msg="Image docker.io/kicbase/echo-server:functional-035643 not found" id=6744e391-6e9c-4d1e-b32b-9bdf44020766 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.342348808Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-035643 found" id=6744e391-6e9c-4d1e-b32b-9bdf44020766 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:50:42 functional-035643 crio[9906]: time="2025-12-12T00:50:42.369193182Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-035643" id=4237a5e8-276b-407b-8794-f7aaa21518e0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:50:44.817408   24034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:44.818917   24034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:44.819803   24034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:44.821479   24034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:50:44.821758   24034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:45] hrtimer: interrupt took 13740716 ns
	[Dec12 00:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 00:11] overlayfs: idmapped layers are currently not supported
	[  +0.124336] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 00:17] overlayfs: idmapped layers are currently not supported
	[Dec12 00:18] overlayfs: idmapped layers are currently not supported
	[Dec12 00:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:50:44 up  3:33,  0 user,  load average: 1.22, 0.50, 0.53
	Linux functional-035643 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:50:42 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:50:42 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1164.
	Dec 12 00:50:42 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:42 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:43 functional-035643 kubelet[23866]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:43 functional-035643 kubelet[23866]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:43 functional-035643 kubelet[23866]: E1212 00:50:43.035449   23866 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:50:43 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:50:43 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:50:43 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1165.
	Dec 12 00:50:43 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:43 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:43 functional-035643 kubelet[23928]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:43 functional-035643 kubelet[23928]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:43 functional-035643 kubelet[23928]: E1212 00:50:43.744780   23928 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:50:43 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:50:43 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:50:44 functional-035643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1166.
	Dec 12 00:50:44 functional-035643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:44 functional-035643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:50:44 functional-035643 kubelet[23958]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:44 functional-035643 kubelet[23958]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 00:50:44 functional-035643 kubelet[23958]: E1212 00:50:44.523535   23958 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:50:44 functional-035643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:50:44 functional-035643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-035643 -n functional-035643: exit status 2 (465.683841ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-035643" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.56s)
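The kubelet excerpt in the logs above carries the root cause for this whole group of failures: kubelet v1.35.0-beta.0 validates its configuration at startup, finds a cgroup v1 host (this Ubuntu 20.04 runner), and exits, so systemd restart-loops it (counter at 1164 and climbing) and the apiserver never comes back. A minimal sketch of the same host check (the real kubelet validation does more than this):

package main

import (
	"fmt"
	"os"
)

func main() {
	// On a unified (v2) hierarchy this file exists at the cgroup root;
	// on a legacy v1 host, such as this Ubuntu 20.04 runner, it does not.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2: kubelet v1.35.0-beta.0 can start")
	} else {
		fmt.Println("cgroup v1: kubelet v1.35.0-beta.0 refuses to run")
	}
}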

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-035643 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-035643 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1212 00:48:18.201680  544003 out.go:360] Setting OutFile to fd 1 ...
I1212 00:48:18.201914  544003 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:48:18.201942  544003 out.go:374] Setting ErrFile to fd 2...
I1212 00:48:18.201964  544003 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:48:18.202274  544003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:48:18.202637  544003 mustload.go:66] Loading cluster: functional-035643
I1212 00:48:18.203140  544003 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:48:18.203639  544003 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
I1212 00:48:18.233125  544003 host.go:66] Checking if "functional-035643" exists ...
I1212 00:48:18.233461  544003 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 00:48:18.363767  544003 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:48:18.353909938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 00:48:18.363881  544003 api_server.go:166] Checking apiserver status ...
I1212 00:48:18.363940  544003 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:48:18.363985  544003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
I1212 00:48:18.398059  544003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
W1212 00:48:18.541032  544003 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1212 00:48:18.544410  544003 out.go:179] * The control-plane node functional-035643 apiserver is not running: (state=Stopped)
I1212 00:48:18.550865  544003 out.go:179]   To start a cluster, run: "minikube start -p functional-035643"

stdout: * The control-plane node functional-035643 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-035643"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-035643 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 544004: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-035643 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-035643 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-035643 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-035643 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-035643 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)
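The apiserver probe that drives this failure is visible above: minikube runs sudo pgrep -xnf kube-apiserver.*minikube.* inside the node over SSH (api_server.go:166-170), and pgrep exiting with status 1 just means "no process matched", which minikube reports as state=Stopped. Below is a minimal Go sketch of that interpretation, run locally for simplicity rather than over SSH; it is not minikube's actual implementation, only the same exit-code logic with the pattern taken from the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// apiserverRunning mirrors the check in the log: pgrep exits 0 when a
// process matches, 1 when none does, and >1 on real errors.
func apiserverRunning(pattern string) (bool, error) {
	err := exec.Command("pgrep", "-xnf", pattern).Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return false, nil // status 1: no matching process, i.e. apiserver stopped
	}
	return false, err // pgrep itself failed
}

func main() {
	up, err := apiserverRunning("kube-apiserver.*minikube.*")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("apiserver running:", up)
}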

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-035643 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-035643 apply -f testdata/testsvc.yaml: exit status 1 (127.282612ms)
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-035643 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.13s)
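For context, the apply above dies in kubectl's client-side validation step: downloading the OpenAPI schema needs the apiserver, and 192.168.49.2:8441 is refusing connections. The error text itself suggests --validate=false, which only skips that validation; a short sketch of retrying with the flag from Go follows, though here the request to the server would still be refused.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same apply as the test, minus client-side validation. The flag is
	// the one suggested in the stderr above; with the apiserver down,
	// the connection-refused failure simply moves to the POST itself.
	out, err := exec.Command("kubectl", "--context", "functional-035643",
		"apply", "--validate=false", "-f", "testdata/testsvc.yaml").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}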

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (120.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.99.222.93": Temporary Error: Get "http://10.99.222.93": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-035643 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-035643 get svc nginx-svc: exit status 1 (61.118906ms)
** stderr ** 
	E1212 00:50:19.522074  545232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:19.523638  545232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:19.525073  545232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:19.526554  545232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1212 00:50:19.527994  545232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-035643 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (120.74s)
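The "(Client.Timeout exceeded while awaiting headers)" phrasing above is net/http's standard decoration when an http.Client's Timeout fires before response headers arrive. A minimal sketch that reproduces the same error shape, using the nginx-svc ClusterIP from the log (only routable while a tunnel or an in-cluster path exists):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	// 10.99.222.93 is the nginx-svc ClusterIP from the log; with no
	// tunnel running, the GET hangs until the client timeout fires.
	resp, err := client.Get("http://10.99.222.93")
	if err != nil {
		// e.g. Get "http://10.99.222.93": context deadline exceeded
		// (Client.Timeout exceeded while awaiting headers)
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}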

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-035643 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-035643 create deployment hello-node --image kicbase/echo-server: exit status 1 (56.670076ms)
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-035643 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 service list: exit status 103 (262.369173ms)
-- stdout --
	* The control-plane node functional-035643 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-035643"
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-035643 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-035643 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-035643\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 service list -o json: exit status 103 (267.41163ms)
-- stdout --
	* The control-plane node functional-035643 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-035643"
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-035643 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 service --namespace=default --https --url hello-node: exit status 103 (260.47843ms)
-- stdout --
	* The control-plane node functional-035643 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-035643"
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-035643 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 service hello-node --url --format={{.IP}}: exit status 103 (262.92505ms)
-- stdout --
	* The control-plane node functional-035643 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-035643"
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-035643 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-035643 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-035643\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 service hello-node --url: exit status 103 (266.616561ms)
-- stdout --
	* The control-plane node functional-035643 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-035643"
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-035643 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-035643 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-035643"
functional_test.go:1579: failed to parse "* The control-plane node functional-035643 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-035643\"": parse "* The control-plane node functional-035643 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-035643\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.27s)
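The parse failure at the end is net/url behaving as documented: control characters are invalid in URLs, and the "URL" handed to the parser here is minikube's two-line advice text, newline included. A minimal reproduction:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The string the test tried to parse as an endpoint; the embedded
	// newline is the offending control character.
	notAURL := "* The control-plane node functional-035643 apiserver is not running: (state=Stopped)\n" +
		"  To start a cluster, run: \"minikube start -p functional-035643\""
	if _, err := url.Parse(notAURL); err != nil {
		fmt.Println(err) // parse ...: net/url: invalid control character in URL
	}
}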

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765500627239159423" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765500627239159423" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765500627239159423" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001/test-1765500627239159423
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (365.644521ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1212 00:50:27.605139  490954 retry.go:31] will retry after 257.756192ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 00:50 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 00:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 00:50 test-1765500627239159423
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh cat /mount-9p/test-1765500627239159423
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-035643 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-035643 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (61.39528ms)
** stderr ** 
	E1212 00:50:28.746772  546781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-035643 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (288.897044ms)
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=44739)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 12 00:50 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 12 00:50 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 12 00:50 test-1765500627239159423
	cat: /mount-9p/pod-dates: No such file or directory
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-035643 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:44739
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001 to /mount-9p
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001:/mount-9p --alsologtostderr -v=1] stderr:
I1212 00:50:27.315336  546458 out.go:360] Setting OutFile to fd 1 ...
I1212 00:50:27.317276  546458 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:27.318758  546458 out.go:374] Setting ErrFile to fd 2...
I1212 00:50:27.318837  546458 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:27.319211  546458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:50:27.319687  546458 mustload.go:66] Loading cluster: functional-035643
I1212 00:50:27.320217  546458 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:50:27.320838  546458 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
I1212 00:50:27.348904  546458 host.go:66] Checking if "functional-035643" exists ...
I1212 00:50:27.349244  546458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 00:50:27.454823  546458 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:50:27.443770373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 00:50:27.454989  546458 cli_runner.go:164] Run: docker network inspect functional-035643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 00:50:27.479561  546458 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001 into VM as /mount-9p ...
I1212 00:50:27.482497  546458 out.go:179]   - Mount type:   9p
I1212 00:50:27.485313  546458 out.go:179]   - User ID:      docker
I1212 00:50:27.488868  546458 out.go:179]   - Group ID:     docker
I1212 00:50:27.491856  546458 out.go:179]   - Version:      9p2000.L
I1212 00:50:27.494818  546458 out.go:179]   - Message Size: 262144
I1212 00:50:27.497728  546458 out.go:179]   - Options:      map[]
I1212 00:50:27.500629  546458 out.go:179]   - Bind Address: 192.168.49.1:44739
I1212 00:50:27.503457  546458 out.go:179] * Userspace file server: 
I1212 00:50:27.503775  546458 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1212 00:50:27.503869  546458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
I1212 00:50:27.532531  546458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
I1212 00:50:27.637736  546458 mount.go:180] unmount for /mount-9p ran successfully
I1212 00:50:27.637763  546458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1212 00:50:27.646370  546458 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=44739,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1212 00:50:27.657101  546458 main.go:127] stdlog: ufs.go:141 connected
I1212 00:50:27.657317  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tversion tag 65535 msize 262144 version '9P2000.L'
I1212 00:50:27.657360  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rversion tag 65535 msize 262144 version '9P2000'
I1212 00:50:27.657688  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1212 00:50:27.657776  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rattach tag 0 aqid (c9d23f 1009a923 'd')
I1212 00:50:27.658159  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 0
I1212 00:50:27.658270  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d23f 1009a923 'd') m d775 at 0 mt 1765500627 l 4096 t 0 d 0 ext )
I1212 00:50:27.661575  546458 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/.mount-process: {Name:mk7c76bf6d44aaca144fa326cd372f6ca9d2b7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:50:27.661778  546458 mount.go:105] mount successful: ""
I1212 00:50:27.665387  546458 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2456160738/001 to /mount-9p
I1212 00:50:27.668381  546458 out.go:203] 
I1212 00:50:27.671354  546458 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1212 00:50:28.403288  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 0
I1212 00:50:28.403384  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d23f 1009a923 'd') m d775 at 0 mt 1765500627 l 4096 t 0 d 0 ext )
I1212 00:50:28.403755  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Twalk tag 0 fid 0 newfid 1 
I1212 00:50:28.403796  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rwalk tag 0 
I1212 00:50:28.403932  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Topen tag 0 fid 1 mode 0
I1212 00:50:28.403981  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Ropen tag 0 qid (c9d23f 1009a923 'd') iounit 0
I1212 00:50:28.404098  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 0
I1212 00:50:28.404134  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d23f 1009a923 'd') m d775 at 0 mt 1765500627 l 4096 t 0 d 0 ext )
I1212 00:50:28.404293  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tread tag 0 fid 1 offset 0 count 262120
I1212 00:50:28.404421  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rread tag 0 count 258
I1212 00:50:28.404564  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tread tag 0 fid 1 offset 258 count 261862
I1212 00:50:28.404599  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rread tag 0 count 0
I1212 00:50:28.404717  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tread tag 0 fid 1 offset 258 count 262120
I1212 00:50:28.404743  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rread tag 0 count 0
I1212 00:50:28.404883  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1212 00:50:28.404917  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rwalk tag 0 (c9d240 1009a923 '') 
I1212 00:50:28.405039  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 2
I1212 00:50:28.405074  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d240 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:28.405205  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 2
I1212 00:50:28.405235  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d240 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:28.405354  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tclunk tag 0 fid 2
I1212 00:50:28.405377  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rclunk tag 0
I1212 00:50:28.405514  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Twalk tag 0 fid 0 newfid 2 0:'test-1765500627239159423' 
I1212 00:50:28.405546  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rwalk tag 0 (c9d242 1009a923 '') 
I1212 00:50:28.405652  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 2
I1212 00:50:28.405691  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('test-1765500627239159423' 'jenkins' 'jenkins' '' q (c9d242 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:28.405828  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 2
I1212 00:50:28.405860  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('test-1765500627239159423' 'jenkins' 'jenkins' '' q (c9d242 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:28.405969  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tclunk tag 0 fid 2
I1212 00:50:28.405990  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rclunk tag 0
I1212 00:50:28.406121  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1212 00:50:28.406166  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rwalk tag 0 (c9d241 1009a923 '') 
I1212 00:50:28.406300  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 2
I1212 00:50:28.406340  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d241 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:28.406462  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 2
I1212 00:50:28.406497  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d241 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:28.406614  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tclunk tag 0 fid 2
I1212 00:50:28.406636  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rclunk tag 0
I1212 00:50:28.406758  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tread tag 0 fid 1 offset 258 count 262120
I1212 00:50:28.406790  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rread tag 0 count 0
I1212 00:50:28.406926  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tclunk tag 0 fid 1
I1212 00:50:28.406959  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rclunk tag 0
I1212 00:50:28.675055  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Twalk tag 0 fid 0 newfid 1 0:'test-1765500627239159423' 
I1212 00:50:28.675126  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rwalk tag 0 (c9d242 1009a923 '') 
I1212 00:50:28.675342  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 1
I1212 00:50:28.675400  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('test-1765500627239159423' 'jenkins' 'jenkins' '' q (c9d242 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:28.675558  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Twalk tag 0 fid 1 newfid 2 
I1212 00:50:28.675590  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rwalk tag 0 
I1212 00:50:28.675755  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Topen tag 0 fid 2 mode 0
I1212 00:50:28.675855  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Ropen tag 0 qid (c9d242 1009a923 '') iounit 0
I1212 00:50:28.676019  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 1
I1212 00:50:28.676063  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('test-1765500627239159423' 'jenkins' 'jenkins' '' q (c9d242 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:28.676221  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tread tag 0 fid 2 offset 0 count 262120
I1212 00:50:28.676273  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rread tag 0 count 24
I1212 00:50:28.676394  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tread tag 0 fid 2 offset 24 count 262120
I1212 00:50:28.676420  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rread tag 0 count 0
I1212 00:50:28.676646  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tread tag 0 fid 2 offset 24 count 262120
I1212 00:50:28.676697  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rread tag 0 count 0
I1212 00:50:28.676916  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tclunk tag 0 fid 2
I1212 00:50:28.676952  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rclunk tag 0
I1212 00:50:28.677148  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tclunk tag 0 fid 1
I1212 00:50:28.677175  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rclunk tag 0
I1212 00:50:29.029702  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 0
I1212 00:50:29.029773  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d23f 1009a923 'd') m d775 at 0 mt 1765500627 l 4096 t 0 d 0 ext )
I1212 00:50:29.030166  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Twalk tag 0 fid 0 newfid 1 
I1212 00:50:29.030207  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rwalk tag 0 
I1212 00:50:29.030357  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Topen tag 0 fid 1 mode 0
I1212 00:50:29.030408  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Ropen tag 0 qid (c9d23f 1009a923 'd') iounit 0
I1212 00:50:29.030545  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 0
I1212 00:50:29.030599  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d23f 1009a923 'd') m d775 at 0 mt 1765500627 l 4096 t 0 d 0 ext )
I1212 00:50:29.030768  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tread tag 0 fid 1 offset 0 count 262120
I1212 00:50:29.030890  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rread tag 0 count 258
I1212 00:50:29.031029  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tread tag 0 fid 1 offset 258 count 261862
I1212 00:50:29.031059  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rread tag 0 count 0
I1212 00:50:29.031175  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tread tag 0 fid 1 offset 258 count 262120
I1212 00:50:29.031221  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rread tag 0 count 0
I1212 00:50:29.031363  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1212 00:50:29.031405  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rwalk tag 0 (c9d240 1009a923 '') 
I1212 00:50:29.031575  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 2
I1212 00:50:29.031614  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d240 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:29.031786  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 2
I1212 00:50:29.031824  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d240 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:29.031953  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tclunk tag 0 fid 2
I1212 00:50:29.031975  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rclunk tag 0
I1212 00:50:29.032123  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Twalk tag 0 fid 0 newfid 2 0:'test-1765500627239159423' 
I1212 00:50:29.032156  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rwalk tag 0 (c9d242 1009a923 '') 
I1212 00:50:29.032274  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 2
I1212 00:50:29.032303  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('test-1765500627239159423' 'jenkins' 'jenkins' '' q (c9d242 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:29.032445  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 2
I1212 00:50:29.032482  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('test-1765500627239159423' 'jenkins' 'jenkins' '' q (c9d242 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:29.032643  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tclunk tag 0 fid 2
I1212 00:50:29.032663  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rclunk tag 0
I1212 00:50:29.032805  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1212 00:50:29.032876  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rwalk tag 0 (c9d241 1009a923 '') 
I1212 00:50:29.032999  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 2
I1212 00:50:29.033032  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d241 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:29.033173  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tstat tag 0 fid 2
I1212 00:50:29.033201  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d241 1009a923 '') m 644 at 0 mt 1765500627 l 24 t 0 d 0 ext )
I1212 00:50:29.033319  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tclunk tag 0 fid 2
I1212 00:50:29.033339  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rclunk tag 0
I1212 00:50:29.033477  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tread tag 0 fid 1 offset 258 count 262120
I1212 00:50:29.033500  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rread tag 0 count 0
I1212 00:50:29.033631  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tclunk tag 0 fid 1
I1212 00:50:29.033663  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rclunk tag 0
I1212 00:50:29.035050  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1212 00:50:29.035126  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rerror tag 0 ename 'file not found' ecode 0
I1212 00:50:29.292782  546458 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43280 Tclunk tag 0 fid 0
I1212 00:50:29.292834  546458 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43280 Rclunk tag 0
I1212 00:50:29.293938  546458 main.go:127] stdlog: ufs.go:147 disconnected
I1212 00:50:29.316044  546458 out.go:179] * Unmounting /mount-9p ...
I1212 00:50:29.319042  546458 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1212 00:50:29.326224  546458 mount.go:180] unmount for /mount-9p ran successfully
I1212 00:50:29.326353  546458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/.mount-process: {Name:mk7c76bf6d44aaca144fa326cd372f6ca9d2b7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:50:29.329545  546458 out.go:203] 
W1212 00:50:29.332555  546458 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1212 00:50:29.335514  546458 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.18s)
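The probe at the top of this log (functional_test_mount_test.go:115, retried via retry.go:31) polls "findmnt -T /mount-9p | grep 9p" through minikube ssh until the 9p mount appears. A minimal sketch of that poll loop, assuming a minikube binary on PATH and a fixed rather than randomized backoff; it is not the test's actual helper, just the same shape:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount repeatedly asks the node whether path is 9p-mounted,
// mirroring the findmnt retry visible in the log above.
func waitForMount(profile, path string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		cmd := exec.Command("minikube", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", path))
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mounted:\n%s", out)
			return nil
		} else if time.Now().After(stop) {
			return fmt.Errorf("mount %s never appeared: %w", path, err)
		}
		time.Sleep(500 * time.Millisecond) // fixed backoff; the test randomizes
	}
}

func main() {
	if err := waitForMount("functional-035643", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}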

TestJSONOutput/pause/Command (1.87s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-042660 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-042660 --output=json --user=testUser: exit status 80 (1.870774668s)
-- stdout --
	{"specversion":"1.0","id":"5b773305-d2c8-4f0f-a911-a8efba1e9e12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-042660 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"43b3a607-98c0-43bb-9f84-ec96a9d39ec6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-12T01:04:37Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"5f870ee6-7a24-45d6-a1cc-a48d57af02e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-042660 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.87s)
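Each stdout line above is a CloudEvents-style JSON object: minikube's --output=json emits one per line, with "type" ending in .step for progress and .error for failures such as the GUEST_PAUSE error here. A minimal sketch of consuming that stream follows; the field names are taken from the events above, and all "data" values arrive as strings.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event models the subset of minikube's cloudevent output used here.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube pause -p <profile> --output=json` into stdin.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // room for long advice-box lines
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise
		}
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Printf("error %s: %s\n", ev.Data["name"], ev.Data["message"])
		}
	}
}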

TestJSONOutput/unpause/Command (2.33s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-042660 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-042660 --output=json --user=testUser: exit status 80 (2.328314322s)
-- stdout --
	{"specversion":"1.0","id":"9d1315e8-788b-4697-a2a3-008f23c890ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-042660 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"62743462-d031-4799-87f3-089b70d7cc7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-12T01:04:40Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"6a370676-0855-4400-af0e-2d019c07ae5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-042660 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.33s)

TestKubernetesUpgrade (782.31s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.892379973s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-224473
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-224473: (1.345914046s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-224473 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-224473 status --format={{.Host}}: exit status 7 (73.96703ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m19.905213217s)
-- stdout --
	* [kubernetes-upgrade-224473] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-224473" primary control-plane node in "kubernetes-upgrade-224473" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	
-- /stdout --
** stderr ** 
	I1212 01:22:57.390819  664006 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:22:57.390958  664006 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:22:57.390969  664006 out.go:374] Setting ErrFile to fd 2...
	I1212 01:22:57.390973  664006 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:22:57.391237  664006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 01:22:57.391625  664006 out.go:368] Setting JSON to false
	I1212 01:22:57.392562  664006 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14723,"bootTime":1765487855,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 01:22:57.392643  664006 start.go:143] virtualization:  
	I1212 01:22:57.395779  664006 out.go:179] * [kubernetes-upgrade-224473] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:22:57.399683  664006 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:22:57.399757  664006 notify.go:221] Checking for updates...
	I1212 01:22:57.405361  664006 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:22:57.408319  664006 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 01:22:57.411304  664006 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 01:22:57.414032  664006 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:22:57.416938  664006 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:22:57.420248  664006 config.go:182] Loaded profile config "kubernetes-upgrade-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 01:22:57.420864  664006 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:22:57.442780  664006 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:22:57.442922  664006 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:22:57.518797  664006 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:22:57.509569151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:22:57.518903  664006 docker.go:319] overlay module found
	I1212 01:22:57.521955  664006 out.go:179] * Using the docker driver based on existing profile
	I1212 01:22:57.524699  664006 start.go:309] selected driver: docker
	I1212 01:22:57.524719  664006 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-224473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-224473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:22:57.524820  664006 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:22:57.525546  664006 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:22:57.584985  664006 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:22:57.575947193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
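The two `docker system info --format "{{json .}}"` calls above fetch the daemon state that gets parsed before the existing profile is reused. A minimal sketch of that probe pattern, assuming only a local docker CLI (the hypothetical dockerInfo struct here decodes a handful of fields; the real info.go output above carries dozens more):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo captures only the fields this sketch cares about; the actual
// `docker system info` JSON (see the log lines above) has many more.
type dockerInfo struct {
	ServerVersion string `json:"ServerVersion"`
	CgroupDriver  string `json:"CgroupDriver"`
	OSType        string `json:"OSType"`
	Architecture  string `json:"Architecture"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s, cgroup driver %s, %s/%s\n",
		info.ServerVersion, info.CgroupDriver, info.OSType, info.Architecture)
}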
	I1212 01:22:57.585311  664006 cni.go:84] Creating CNI manager for ""
	I1212 01:22:57.585372  664006 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 01:22:57.585404  664006 start.go:353] cluster config:
	{Name:kubernetes-upgrade-224473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-224473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:22:57.588716  664006 out.go:179] * Starting "kubernetes-upgrade-224473" primary control-plane node in "kubernetes-upgrade-224473" cluster
	I1212 01:22:57.591557  664006 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 01:22:57.594582  664006 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:22:57.597602  664006 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 01:22:57.597678  664006 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 01:22:57.597690  664006 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:22:57.597689  664006 cache.go:65] Caching tarball of preloaded images
	I1212 01:22:57.597812  664006 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 01:22:57.597829  664006 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 01:22:57.597969  664006 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kubernetes-upgrade-224473/config.json ...
	I1212 01:22:57.619134  664006 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:22:57.619159  664006 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:22:57.619188  664006 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:22:57.619220  664006 start.go:360] acquireMachinesLock for kubernetes-upgrade-224473: {Name:mk8bcbf916125ed43ba9c5cded0d561e69e3afa4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:22:57.619292  664006 start.go:364] duration metric: took 47.137µs to acquireMachinesLock for "kubernetes-upgrade-224473"
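The acquireMachinesLock step above (note the {Delay:500ms Timeout:10m0s} options) serializes concurrent minikube invocations touching the same machine. A rough sketch of such a delay/timeout lock loop, built on an exclusive lock file (illustrative only; minikube's actual lock package differs):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file, retrying every delay until
// timeout elapses -- mirroring the Delay/Timeout pair logged above.
func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return nil // lock held; remove the file to release
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	const lock = "/tmp/minikube-machines.lock" // hypothetical path
	if err := acquire(lock, 500*time.Millisecond, 10*time.Minute); err != nil {
		panic(err)
	}
	defer os.Remove(lock)
	fmt.Println("lock acquired")
}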
	I1212 01:22:57.619320  664006 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:22:57.619329  664006 fix.go:54] fixHost starting: 
	I1212 01:22:57.619627  664006 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-224473 --format={{.State.Status}}
	I1212 01:22:57.637000  664006 fix.go:112] recreateIfNeeded on kubernetes-upgrade-224473: state=Stopped err=<nil>
	W1212 01:22:57.637032  664006 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:22:57.640330  664006 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-224473" ...
	I1212 01:22:57.640421  664006 cli_runner.go:164] Run: docker start kubernetes-upgrade-224473
	I1212 01:22:57.888267  664006 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-224473 --format={{.State.Status}}
	I1212 01:22:57.917074  664006 kic.go:430] container "kubernetes-upgrade-224473" state is running.
	I1212 01:22:57.917471  664006 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-224473
	I1212 01:22:57.941896  664006 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kubernetes-upgrade-224473/config.json ...
	I1212 01:22:57.942132  664006 machine.go:94] provisionDockerMachine start ...
	I1212 01:22:57.942197  664006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-224473
	I1212 01:22:57.969340  664006 main.go:143] libmachine: Using SSH client type: native
	I1212 01:22:57.969685  664006 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33399 <nil> <nil>}
	I1212 01:22:57.969699  664006 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:22:57.970346  664006 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 01:23:01.122610  664006 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-224473
	
	I1212 01:23:01.122639  664006 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-224473"
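The `Error dialing TCP: ssh: handshake failed: EOF` at 01:22:57 followed by a clean `hostname` result at 01:23:01 shows the provisioner retrying SSH until sshd inside the freshly restarted container accepts connections. A stripped-down sketch of that retry, assuming golang.org/x/crypto/ssh and the machine key path seen in the log:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps attempting an SSH handshake until the container's
// sshd is up, as in the log where the first dial fails with EOF.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	keyBytes, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/kubernetes-upgrade-224473/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local dev VM only
	}
	client, err := dialWithRetry("127.0.0.1:33399", cfg, time.Minute)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, _ := session.Output("hostname")
	fmt.Printf("%s", out)
}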
	I1212 01:23:01.122728  664006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-224473
	I1212 01:23:01.141481  664006 main.go:143] libmachine: Using SSH client type: native
	I1212 01:23:01.141801  664006 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33399 <nil> <nil>}
	I1212 01:23:01.141812  664006 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-224473 && echo "kubernetes-upgrade-224473" | sudo tee /etc/hostname
	I1212 01:23:01.304370  664006 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-224473
	
	I1212 01:23:01.304456  664006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-224473
	I1212 01:23:01.330766  664006 main.go:143] libmachine: Using SSH client type: native
	I1212 01:23:01.331073  664006 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33399 <nil> <nil>}
	I1212 01:23:01.331093  664006 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-224473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-224473/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-224473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:23:01.479016  664006 main.go:143] libmachine: SSH cmd err, output: <nil>: 
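The shell script above is an idempotent /etc/hosts edit: it leaves the file alone if any line already ends in the hostname, otherwise it rewrites (or appends) the 127.0.1.1 entry. The same invariant expressed in Go, as a sketch rather than minikube code:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the logged shell: if some line already ends with
// the hostname, do nothing; otherwise rewrite or append the 127.0.1.1 entry.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, "\t"+hostname) || strings.HasSuffix(l, " "+hostname) {
			return nil // already present, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "kubernetes-upgrade-224473"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}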
	I1212 01:23:01.479055  664006 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 01:23:01.479077  664006 ubuntu.go:190] setting up certificates
	I1212 01:23:01.479094  664006 provision.go:84] configureAuth start
	I1212 01:23:01.479159  664006 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-224473
	I1212 01:23:01.496664  664006 provision.go:143] copyHostCerts
	I1212 01:23:01.496744  664006 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 01:23:01.496757  664006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 01:23:01.496834  664006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 01:23:01.496943  664006 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 01:23:01.496953  664006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 01:23:01.496988  664006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 01:23:01.497100  664006 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 01:23:01.497110  664006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 01:23:01.497137  664006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 01:23:01.497200  664006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-224473 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-224473 localhost minikube]
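The "generating server cert" line above signs a machine certificate against the profile CA with the SANs it lists (127.0.0.1, 192.168.85.2, the hostname, localhost, minikube). A self-contained sketch of that signing with the standard library, error handling dropped for brevity; the throwaway CA and lifetimes are illustrative stand-ins for the ca.pem/ca-key.pem pair in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-224473"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-224473", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}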
	I1212 01:23:01.646548  664006 provision.go:177] copyRemoteCerts
	I1212 01:23:01.646621  664006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:23:01.646660  664006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-224473
	I1212 01:23:01.664464  664006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33399 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/kubernetes-upgrade-224473/id_rsa Username:docker}
	I1212 01:23:01.770408  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:23:01.788333  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 01:23:01.806441  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:23:01.824352  664006 provision.go:87] duration metric: took 345.232145ms to configureAuth
	I1212 01:23:01.824388  664006 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:23:01.824598  664006 config.go:182] Loaded profile config "kubernetes-upgrade-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 01:23:01.824716  664006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-224473
	I1212 01:23:01.842107  664006 main.go:143] libmachine: Using SSH client type: native
	I1212 01:23:01.842429  664006 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33399 <nil> <nil>}
	I1212 01:23:01.842443  664006 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:23:02.193064  664006 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:23:02.193096  664006 machine.go:97] duration metric: took 4.250936467s to provisionDockerMachine
	I1212 01:23:02.193109  664006 start.go:293] postStartSetup for "kubernetes-upgrade-224473" (driver="docker")
	I1212 01:23:02.193121  664006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:23:02.193188  664006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:23:02.193231  664006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-224473
	I1212 01:23:02.210496  664006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33399 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/kubernetes-upgrade-224473/id_rsa Username:docker}
	I1212 01:23:02.315010  664006 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:23:02.318340  664006 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:23:02.318369  664006 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:23:02.318380  664006 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 01:23:02.318435  664006 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 01:23:02.318523  664006 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 01:23:02.318647  664006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:23:02.326218  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 01:23:02.344115  664006 start.go:296] duration metric: took 150.99201ms for postStartSetup
	I1212 01:23:02.344192  664006 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:23:02.344238  664006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-224473
	I1212 01:23:02.362543  664006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33399 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/kubernetes-upgrade-224473/id_rsa Username:docker}
	I1212 01:23:02.463637  664006 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:23:02.468362  664006 fix.go:56] duration metric: took 4.849027072s for fixHost
	I1212 01:23:02.468388  664006 start.go:83] releasing machines lock for "kubernetes-upgrade-224473", held for 4.84908193s
	I1212 01:23:02.468456  664006 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-224473
	I1212 01:23:02.485480  664006 ssh_runner.go:195] Run: cat /version.json
	I1212 01:23:02.485536  664006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-224473
	I1212 01:23:02.485777  664006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:23:02.485826  664006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-224473
	I1212 01:23:02.504532  664006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33399 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/kubernetes-upgrade-224473/id_rsa Username:docker}
	I1212 01:23:02.516648  664006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33399 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/kubernetes-upgrade-224473/id_rsa Username:docker}
	I1212 01:23:02.606189  664006 ssh_runner.go:195] Run: systemctl --version
	I1212 01:23:02.702359  664006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:23:02.745203  664006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:23:02.749525  664006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:23:02.749610  664006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:23:02.757455  664006 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 01:23:02.757479  664006 start.go:496] detecting cgroup driver to use...
	I1212 01:23:02.757521  664006 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:23:02.757575  664006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:23:02.773029  664006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:23:02.786552  664006 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:23:02.786638  664006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:23:02.802701  664006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:23:02.816041  664006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:23:02.934260  664006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:23:03.048394  664006 docker.go:234] disabling docker service ...
	I1212 01:23:03.048529  664006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:23:03.065036  664006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:23:03.078326  664006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:23:03.199008  664006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:23:03.316056  664006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:23:03.329228  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:23:03.344315  664006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 01:23:03.344403  664006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:03.353305  664006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:23:03.353375  664006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:03.362261  664006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:03.372157  664006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:03.381165  664006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:23:03.389284  664006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:03.397977  664006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:03.406820  664006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:03.415846  664006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:23:03.423149  664006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:23:03.430236  664006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:23:03.572126  664006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:23:03.749340  664006 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:23:03.749480  664006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:23:03.753279  664006 start.go:564] Will wait 60s for crictl version
	I1212 01:23:03.753395  664006 ssh_runner.go:195] Run: which crictl
	I1212 01:23:03.756983  664006 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:23:03.781582  664006 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
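"Will wait 60s for socket path" and "Will wait 60s for crictl version" above are plain readiness polls after `systemctl restart crio`. A sketch of the socket poll; the logged code shells `stat` over SSH, this one stats locally for brevity:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket exists or the timeout passes,
// matching the 60s wait logged after crio is restarted.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s never appeared", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is up")
}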
	I1212 01:23:03.781678  664006 ssh_runner.go:195] Run: crio --version
	I1212 01:23:03.818170  664006 ssh_runner.go:195] Run: crio --version
	I1212 01:23:03.850659  664006 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 01:23:03.853610  664006 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-224473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:23:03.870093  664006 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 01:23:03.874036  664006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:23:03.883850  664006 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-224473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-224473 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:23:03.883963  664006 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 01:23:03.884035  664006 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:23:03.915914  664006 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1212 01:23:03.915985  664006 ssh_runner.go:195] Run: which lz4
	I1212 01:23:03.919694  664006 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:23:03.923210  664006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:23:03.923246  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1212 01:23:05.357187  664006 crio.go:462] duration metric: took 1.437538773s to copy over tarball
	I1212 01:23:05.357282  664006 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:23:07.810320  664006 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.453006994s)
	I1212 01:23:07.810344  664006 crio.go:469] duration metric: took 2.453111427s to extract the tarball
	I1212 01:23:07.810352  664006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:23:07.852916  664006 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:23:07.908223  664006 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:23:07.908248  664006 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:23:07.908256  664006 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 01:23:07.908358  664006 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-224473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-224473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
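The kubelet drop-in above is rendered from the node config and written out as the 382-byte 10-kubeadm.conf scp'd a few lines below. A sketch of rendering such a unit with text/template; the template text is approximated from the logged output and trimmed to a few flags, so treat the field names as hypothetical:

package main

import (
	"os"
	"text/template"
)

// unitTmpl approximates the drop-in printed in the log; the empty
// ExecStart= line clears any ExecStart inherited from the base unit.
const unitTmpl = `[Unit]
Wants={{.ContainerRuntime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	t.Execute(os.Stdout, map[string]string{
		"ContainerRuntime":  "crio",
		"KubernetesVersion": "v1.35.0-beta.0",
		"NodeName":          "kubernetes-upgrade-224473",
		"NodeIP":            "192.168.85.2",
	})
}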
	I1212 01:23:07.908440  664006 ssh_runner.go:195] Run: crio config
	I1212 01:23:07.985494  664006 cni.go:84] Creating CNI manager for ""
	I1212 01:23:07.985518  664006 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 01:23:07.985541  664006 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:23:07.985598  664006 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-224473 NodeName:kubernetes-upgrade-224473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:23:07.985800  664006 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-224473"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
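The kubeadm config above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) later copied to /var/tmp/minikube/kubeadm.yaml.new. A small sketch, assuming gopkg.in/yaml.v3, that walks the documents and prints each apiVersion/kind as a cheap sanity check before handing the file to `kubeadm init`:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Reads a multi-document kubeadm.yaml like the one above and prints each
// document's apiVersion/kind.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}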
	
	I1212 01:23:07.985897  664006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:23:07.994797  664006 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:23:07.994908  664006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:23:08.003475  664006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1212 01:23:08.019774  664006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:23:08.035715  664006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1212 01:23:08.050064  664006 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:23:08.055455  664006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:23:08.069735  664006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:23:08.198982  664006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:23:08.215350  664006 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kubernetes-upgrade-224473 for IP: 192.168.85.2
	I1212 01:23:08.215411  664006 certs.go:195] generating shared ca certs ...
	I1212 01:23:08.215440  664006 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:08.215612  664006 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 01:23:08.215680  664006 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 01:23:08.215701  664006 certs.go:257] generating profile certs ...
	I1212 01:23:08.215816  664006 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kubernetes-upgrade-224473/client.key
	I1212 01:23:08.215926  664006 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kubernetes-upgrade-224473/apiserver.key.bdcb0b42
	I1212 01:23:08.215993  664006 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kubernetes-upgrade-224473/proxy-client.key
	I1212 01:23:08.216135  664006 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 01:23:08.216193  664006 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 01:23:08.216217  664006 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:23:08.216280  664006 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:23:08.216331  664006 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:23:08.216387  664006 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 01:23:08.216459  664006 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 01:23:08.217166  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:23:08.244214  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:23:08.267965  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:23:08.293028  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:23:08.314053  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kubernetes-upgrade-224473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 01:23:08.332400  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kubernetes-upgrade-224473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:23:08.351613  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kubernetes-upgrade-224473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:23:08.373692  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kubernetes-upgrade-224473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:23:08.409791  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 01:23:08.429614  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 01:23:08.448857  664006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:23:08.469290  664006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:23:08.483274  664006 ssh_runner.go:195] Run: openssl version
	I1212 01:23:08.493904  664006 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 01:23:08.507093  664006 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 01:23:08.515011  664006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 01:23:08.519137  664006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 01:23:08.519207  664006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 01:23:08.566338  664006 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:23:08.574521  664006 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 01:23:08.582047  664006 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 01:23:08.589442  664006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 01:23:08.593329  664006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 01:23:08.593461  664006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 01:23:08.635413  664006 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:23:08.643347  664006 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:08.650603  664006 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:23:08.660181  664006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:08.665103  664006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:08.665217  664006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:08.708008  664006 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
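The repeated `openssl x509 -hash -noout` / `ln -fs` / `test -L /etc/ssl/certs/<hash>.0` sequences above install each CA into OpenSSL's hashed lookup directory, where TLS libraries find trust anchors by subject-hash filename (e.g. b5213941.0 for minikubeCA.pem). The same wiring in Go, shelling out to openssl for the hash; a sketch, not minikube's certs.go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links a PEM certificate into /etc/ssl/certs under its OpenSSL
// subject hash, the convention the log verifies with `test -L`.
func installCA(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := installCA("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Println("installed", link)
}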
	I1212 01:23:08.715387  664006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:23:08.719579  664006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:23:08.763449  664006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:23:08.806521  664006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:23:08.847802  664006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:23:08.892412  664006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:23:08.934421  664006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
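Each `openssl x509 -noout -checkend 86400` above exits non-zero if the certificate expires within 24 hours, which is how this restart path decides whether control-plane certs need regenerating. The equivalent check with crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside d,
// mirroring `openssl x509 -checkend` (86400s == 24h in the log).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}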
	I1212 01:23:08.976374  664006 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-224473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-224473 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:23:08.976463  664006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:23:08.976552  664006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:23:09.011388  664006 cri.go:89] found id: ""
	I1212 01:23:09.011517  664006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:23:09.019919  664006 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 01:23:09.019994  664006 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 01:23:09.020066  664006 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:23:09.028124  664006 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:23:09.028756  664006 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-224473" does not appear in /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 01:23:09.029083  664006 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-487723/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-224473" cluster setting kubeconfig missing "kubernetes-upgrade-224473" context setting]
	I1212 01:23:09.029572  664006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:09.030283  664006 kapi.go:59] client config for kubernetes-upgrade-224473: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kubernetes-upgrade-224473/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kubernetes-upgrade-224473/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 01:23:09.031099  664006 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 01:23:09.031199  664006 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 01:23:09.031221  664006 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 01:23:09.031247  664006 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 01:23:09.031276  664006 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 01:23:09.031588  664006 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:23:09.042550  664006 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 01:22:36.311049277 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 01:23:08.046220173 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-224473"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
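The drift check above runs `diff -u old new` and branches on the exit status: 0 means the rendered config matches what is on disk, 1 means it drifted (here, the v1beta3 to v1beta4 migration plus the version bump) and the cluster gets reconfigured. A sketch of that exit-code split in Go, where exec.ExitError carries the status:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted returns (true, diff) when the rendered kubeadm.yaml.new
// differs from the one on disk, as in the v1beta3 -> v1beta4 diff above.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: identical
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit 2+: missing file or other trouble
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("will reconfigure cluster:\n" + diff)
	}
}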
	I1212 01:23:09.042618  664006 kubeadm.go:1161] stopping kube-system containers ...
	I1212 01:23:09.042650  664006 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:23:09.042748  664006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:23:09.069624  664006 cri.go:89] found id: ""
	I1212 01:23:09.069692  664006 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:23:09.083050  664006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:23:09.091008  664006 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 12 01:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 12 01:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 12 01:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 12 01:22 /etc/kubernetes/scheduler.conf
	
	I1212 01:23:09.091096  664006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:23:09.099064  664006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:23:09.106882  664006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:23:09.114488  664006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:23:09.114594  664006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:23:09.121855  664006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:23:09.129534  664006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:23:09.129648  664006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:23:09.136873  664006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:23:09.144370  664006 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:23:09.194366  664006 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:23:10.359802  664006 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.165402617s)
	I1212 01:23:10.359933  664006 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:23:10.583273  664006 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:23:10.647140  664006 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
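The five kubeadm init phases run here — certs, kubeconfig, kubelet-start, control-plane, etcd — rebuild the control plane piecewise against the staged config, with PATH pointed at the version-pinned binaries. A sketch replaying that sequence via sudo /bin/bash -c exactly as the log invokes it; runInitPhases is an invented wrapper:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhases replays the phase sequence from the log, in order, each
// against the staged config and the version-pinned kubeadm binary.
func runInitPhases() error {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		// Same invocation shape as the log: sudo /bin/bash -c "env PATH=... kubeadm ...".
		script := fmt.Sprintf(
			`env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("sudo", "/bin/bash", "-c", script).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm init phase %s: %w\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}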
	I1212 01:23:10.689943  664006 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:23:10.690032  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... sudo pgrep -xnf kube-apiserver.*minikube.* retried every 500ms with identical output and no match, 01:23:11 through 01:24:09 ...]
	I1212 01:24:10.190999  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
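The wait above is a plain process-existence poll: pgrep -xnf against the apiserver command line every 500ms, where pgrep exits 0 once something matches. A self-contained Go version with an explicit deadline; waitForAPIServer and the one-minute timeout are illustrative (the log's own wait ran roughly a minute, 01:23:10 to 01:24:10, before moving on to diagnostics):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep on the same 500ms cadence as the log until
// a kube-apiserver process appears or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 as soon as a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}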
	I1212 01:24:10.691087  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:10.691184  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:10.775343  664006 cri.go:89] found id: ""
	I1212 01:24:10.775363  664006 logs.go:282] 0 containers: []
	W1212 01:24:10.775371  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:10.775377  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:10.775434  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:10.807995  664006 cri.go:89] found id: ""
	I1212 01:24:10.808017  664006 logs.go:282] 0 containers: []
	W1212 01:24:10.808026  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:10.808031  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:10.808089  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:10.852997  664006 cri.go:89] found id: ""
	I1212 01:24:10.853020  664006 logs.go:282] 0 containers: []
	W1212 01:24:10.853029  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:10.853035  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:10.853095  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:10.896633  664006 cri.go:89] found id: ""
	I1212 01:24:10.896656  664006 logs.go:282] 0 containers: []
	W1212 01:24:10.896665  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:10.896670  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:10.896732  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:10.930856  664006 cri.go:89] found id: ""
	I1212 01:24:10.930878  664006 logs.go:282] 0 containers: []
	W1212 01:24:10.930887  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:10.930892  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:10.930952  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:10.969718  664006 cri.go:89] found id: ""
	I1212 01:24:10.969740  664006 logs.go:282] 0 containers: []
	W1212 01:24:10.969749  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:10.969756  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:10.969816  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:11.022160  664006 cri.go:89] found id: ""
	I1212 01:24:11.022183  664006 logs.go:282] 0 containers: []
	W1212 01:24:11.022192  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:11.022198  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:11.022261  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:11.064949  664006 cri.go:89] found id: ""
	I1212 01:24:11.064971  664006 logs.go:282] 0 containers: []
	W1212 01:24:11.064979  664006 logs.go:284] No container was found matching "storage-provisioner"
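Each component check above is the same crictl invocation with a different --name filter, and every one returned an empty ID list because no control-plane container had started. A Go sketch of that sweep; listContainerIDs is an invented helper, while the component list and flags are those in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Components checked in the log, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

// listContainerIDs returns the IDs crictl reports for one component name,
// in any state (crictl ps -a --quiet --name=<component>).
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			// Matches the log's warning while nothing is running yet.
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Println(c, ids)
	}
}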
	I1212 01:24:11.064988  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:11.065000  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:11.163454  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:11.163533  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:11.183520  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:11.183548  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:11.590861  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:11.590882  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:11.590895  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:11.630961  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:11.631045  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
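The log gathering here is deliberately best-effort: each collector is a shell one-liner, and a failure (describe nodes is refused while the apiserver is down) is recorded without aborting the rest. A sketch under that assumption — the commands are copied from the log, gatherLogs is an invented name, and Go's map iteration order is unspecified, much as the log's own gathering order shifts between cycles:

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs each collector, printing its output or recording the
// failure and continuing, mirroring the sequence above.
func gatherLogs() {
	collectors := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range collectors {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("failed %s: %v\n", name, err) // e.g. describe nodes while :8443 refuses
			continue
		}
		fmt.Printf("=== %s ===\n%s", name, out)
	}
}

func main() { gatherLogs() }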
	I1212 01:24:14.173872  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:14.183808  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:14.183876  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:14.209524  664006 cri.go:89] found id: ""
	I1212 01:24:14.209548  664006 logs.go:282] 0 containers: []
	W1212 01:24:14.209556  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:14.209562  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:14.209627  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:14.235159  664006 cri.go:89] found id: ""
	I1212 01:24:14.235183  664006 logs.go:282] 0 containers: []
	W1212 01:24:14.235192  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:14.235198  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:14.235263  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:14.261423  664006 cri.go:89] found id: ""
	I1212 01:24:14.261452  664006 logs.go:282] 0 containers: []
	W1212 01:24:14.261461  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:14.261468  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:14.261528  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:14.290121  664006 cri.go:89] found id: ""
	I1212 01:24:14.290143  664006 logs.go:282] 0 containers: []
	W1212 01:24:14.290152  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:14.290158  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:14.290216  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:14.317865  664006 cri.go:89] found id: ""
	I1212 01:24:14.317891  664006 logs.go:282] 0 containers: []
	W1212 01:24:14.317900  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:14.317921  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:14.317986  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:14.344457  664006 cri.go:89] found id: ""
	I1212 01:24:14.344483  664006 logs.go:282] 0 containers: []
	W1212 01:24:14.344500  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:14.344506  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:14.344567  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:14.373614  664006 cri.go:89] found id: ""
	I1212 01:24:14.373638  664006 logs.go:282] 0 containers: []
	W1212 01:24:14.373647  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:14.373653  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:14.373714  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:14.398905  664006 cri.go:89] found id: ""
	I1212 01:24:14.398934  664006 logs.go:282] 0 containers: []
	W1212 01:24:14.398944  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:14.398954  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:14.398967  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:14.464145  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:14.464165  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:14.464177  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:14.496363  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:14.496404  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:14.545727  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:14.545804  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:14.623668  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:14.623755  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:17.146801  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:17.156978  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:17.157050  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:17.206107  664006 cri.go:89] found id: ""
	I1212 01:24:17.206129  664006 logs.go:282] 0 containers: []
	W1212 01:24:17.206138  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:17.206144  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:17.206202  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:17.241478  664006 cri.go:89] found id: ""
	I1212 01:24:17.241501  664006 logs.go:282] 0 containers: []
	W1212 01:24:17.241510  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:17.241516  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:17.241571  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:17.273193  664006 cri.go:89] found id: ""
	I1212 01:24:17.273217  664006 logs.go:282] 0 containers: []
	W1212 01:24:17.273226  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:17.273232  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:17.273298  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:17.309016  664006 cri.go:89] found id: ""
	I1212 01:24:17.309041  664006 logs.go:282] 0 containers: []
	W1212 01:24:17.309050  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:17.309056  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:17.309113  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:17.345916  664006 cri.go:89] found id: ""
	I1212 01:24:17.345940  664006 logs.go:282] 0 containers: []
	W1212 01:24:17.345948  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:17.345954  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:17.346012  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:17.378170  664006 cri.go:89] found id: ""
	I1212 01:24:17.378194  664006 logs.go:282] 0 containers: []
	W1212 01:24:17.378202  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:17.378209  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:17.378266  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:17.424977  664006 cri.go:89] found id: ""
	I1212 01:24:17.425003  664006 logs.go:282] 0 containers: []
	W1212 01:24:17.425012  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:17.425018  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:17.425076  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:17.477040  664006 cri.go:89] found id: ""
	I1212 01:24:17.477063  664006 logs.go:282] 0 containers: []
	W1212 01:24:17.477071  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:17.477081  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:17.477092  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:17.557171  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:17.557211  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:17.575240  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:17.575268  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:17.660085  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:17.660107  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:17.660120  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:17.695418  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:17.695451  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:20.247357  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:20.258641  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:20.258717  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:20.290242  664006 cri.go:89] found id: ""
	I1212 01:24:20.290262  664006 logs.go:282] 0 containers: []
	W1212 01:24:20.290271  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:20.290277  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:20.290345  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:20.336129  664006 cri.go:89] found id: ""
	I1212 01:24:20.336152  664006 logs.go:282] 0 containers: []
	W1212 01:24:20.336167  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:20.336173  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:20.336240  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:20.371607  664006 cri.go:89] found id: ""
	I1212 01:24:20.371640  664006 logs.go:282] 0 containers: []
	W1212 01:24:20.371649  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:20.371660  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:20.371740  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:20.409244  664006 cri.go:89] found id: ""
	I1212 01:24:20.409270  664006 logs.go:282] 0 containers: []
	W1212 01:24:20.409281  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:20.409287  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:20.409366  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:20.445240  664006 cri.go:89] found id: ""
	I1212 01:24:20.445324  664006 logs.go:282] 0 containers: []
	W1212 01:24:20.445347  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:20.445366  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:20.445483  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:20.478365  664006 cri.go:89] found id: ""
	I1212 01:24:20.478386  664006 logs.go:282] 0 containers: []
	W1212 01:24:20.478400  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:20.478408  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:20.478478  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:20.526145  664006 cri.go:89] found id: ""
	I1212 01:24:20.526173  664006 logs.go:282] 0 containers: []
	W1212 01:24:20.526182  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:20.526198  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:20.526281  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:20.565067  664006 cri.go:89] found id: ""
	I1212 01:24:20.565088  664006 logs.go:282] 0 containers: []
	W1212 01:24:20.565096  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:20.565105  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:20.565120  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:20.642807  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:20.642889  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:20.659468  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:20.659498  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:20.796007  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:20.796122  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:20.796162  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:20.843505  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:20.843538  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:23.386793  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:23.398921  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:23.398992  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:23.440864  664006 cri.go:89] found id: ""
	I1212 01:24:23.440889  664006 logs.go:282] 0 containers: []
	W1212 01:24:23.440898  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:23.440904  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:23.440963  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:23.481205  664006 cri.go:89] found id: ""
	I1212 01:24:23.481231  664006 logs.go:282] 0 containers: []
	W1212 01:24:23.481239  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:23.481246  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:23.481304  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:23.544949  664006 cri.go:89] found id: ""
	I1212 01:24:23.544974  664006 logs.go:282] 0 containers: []
	W1212 01:24:23.544983  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:23.544988  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:23.545046  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:23.601270  664006 cri.go:89] found id: ""
	I1212 01:24:23.601295  664006 logs.go:282] 0 containers: []
	W1212 01:24:23.601304  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:23.601310  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:23.601368  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:23.647455  664006 cri.go:89] found id: ""
	I1212 01:24:23.647482  664006 logs.go:282] 0 containers: []
	W1212 01:24:23.647491  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:23.647497  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:23.647556  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:23.692832  664006 cri.go:89] found id: ""
	I1212 01:24:23.692855  664006 logs.go:282] 0 containers: []
	W1212 01:24:23.692864  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:23.692870  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:23.692937  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:23.751224  664006 cri.go:89] found id: ""
	I1212 01:24:23.751250  664006 logs.go:282] 0 containers: []
	W1212 01:24:23.751258  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:23.751265  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:23.751324  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:23.789276  664006 cri.go:89] found id: ""
	I1212 01:24:23.789301  664006 logs.go:282] 0 containers: []
	W1212 01:24:23.789310  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:23.789319  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:23.789331  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:23.814132  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:23.814163  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:23.939726  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:23.939747  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:23.939761  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:23.983274  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:23.983363  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:24.029259  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:24.029350  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:26.611845  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:26.627780  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:26.627850  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:26.689509  664006 cri.go:89] found id: ""
	I1212 01:24:26.689532  664006 logs.go:282] 0 containers: []
	W1212 01:24:26.689540  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:26.689546  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:26.689606  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:26.732188  664006 cri.go:89] found id: ""
	I1212 01:24:26.732209  664006 logs.go:282] 0 containers: []
	W1212 01:24:26.732218  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:26.732223  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:26.732291  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:26.759850  664006 cri.go:89] found id: ""
	I1212 01:24:26.759869  664006 logs.go:282] 0 containers: []
	W1212 01:24:26.759877  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:26.759883  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:26.759942  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:26.799157  664006 cri.go:89] found id: ""
	I1212 01:24:26.799178  664006 logs.go:282] 0 containers: []
	W1212 01:24:26.799187  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:26.799193  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:26.799251  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:26.830057  664006 cri.go:89] found id: ""
	I1212 01:24:26.830080  664006 logs.go:282] 0 containers: []
	W1212 01:24:26.830088  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:26.830094  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:26.830153  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:26.864591  664006 cri.go:89] found id: ""
	I1212 01:24:26.864613  664006 logs.go:282] 0 containers: []
	W1212 01:24:26.864621  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:26.864627  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:26.864684  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:26.904962  664006 cri.go:89] found id: ""
	I1212 01:24:26.904995  664006 logs.go:282] 0 containers: []
	W1212 01:24:26.905003  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:26.905009  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:26.905068  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:26.941052  664006 cri.go:89] found id: ""
	I1212 01:24:26.941074  664006 logs.go:282] 0 containers: []
	W1212 01:24:26.941082  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:26.941091  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:26.941102  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:27.018653  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:27.018742  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:27.034609  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:27.034742  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:27.137816  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:27.137879  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:27.137905  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:27.173662  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:27.173699  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:29.719014  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:29.729493  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:29.729579  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:29.754849  664006 cri.go:89] found id: ""
	I1212 01:24:29.754873  664006 logs.go:282] 0 containers: []
	W1212 01:24:29.754881  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:29.754888  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:29.754950  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:29.781361  664006 cri.go:89] found id: ""
	I1212 01:24:29.781384  664006 logs.go:282] 0 containers: []
	W1212 01:24:29.781392  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:29.781398  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:29.781459  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:29.810052  664006 cri.go:89] found id: ""
	I1212 01:24:29.810075  664006 logs.go:282] 0 containers: []
	W1212 01:24:29.810084  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:29.810090  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:29.810152  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:29.836146  664006 cri.go:89] found id: ""
	I1212 01:24:29.836224  664006 logs.go:282] 0 containers: []
	W1212 01:24:29.836249  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:29.836263  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:29.836341  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:29.862547  664006 cri.go:89] found id: ""
	I1212 01:24:29.862572  664006 logs.go:282] 0 containers: []
	W1212 01:24:29.862581  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:29.862587  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:29.862648  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:29.889541  664006 cri.go:89] found id: ""
	I1212 01:24:29.889567  664006 logs.go:282] 0 containers: []
	W1212 01:24:29.889576  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:29.889583  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:29.889643  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:29.914944  664006 cri.go:89] found id: ""
	I1212 01:24:29.914973  664006 logs.go:282] 0 containers: []
	W1212 01:24:29.914984  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:29.914990  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:29.915049  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:29.941373  664006 cri.go:89] found id: ""
	I1212 01:24:29.941398  664006 logs.go:282] 0 containers: []
	W1212 01:24:29.941407  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:29.941415  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:29.941427  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:29.957827  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:29.957856  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:30.079756  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:30.079842  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:30.079870  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:30.115065  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:30.115101  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:30.153995  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:30.154021  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:32.735923  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:32.746432  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:32.746499  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:32.773415  664006 cri.go:89] found id: ""
	I1212 01:24:32.773440  664006 logs.go:282] 0 containers: []
	W1212 01:24:32.773449  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:32.773456  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:32.773519  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:32.798724  664006 cri.go:89] found id: ""
	I1212 01:24:32.798749  664006 logs.go:282] 0 containers: []
	W1212 01:24:32.798758  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:32.798764  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:32.798848  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:32.824840  664006 cri.go:89] found id: ""
	I1212 01:24:32.824865  664006 logs.go:282] 0 containers: []
	W1212 01:24:32.824873  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:32.824879  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:32.824942  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:32.853432  664006 cri.go:89] found id: ""
	I1212 01:24:32.853457  664006 logs.go:282] 0 containers: []
	W1212 01:24:32.853466  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:32.853479  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:32.853541  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:32.880468  664006 cri.go:89] found id: ""
	I1212 01:24:32.880495  664006 logs.go:282] 0 containers: []
	W1212 01:24:32.880504  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:32.880510  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:32.880589  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:32.907098  664006 cri.go:89] found id: ""
	I1212 01:24:32.907122  664006 logs.go:282] 0 containers: []
	W1212 01:24:32.907130  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:32.907137  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:32.907197  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:32.933433  664006 cri.go:89] found id: ""
	I1212 01:24:32.933457  664006 logs.go:282] 0 containers: []
	W1212 01:24:32.933465  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:32.933472  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:32.933533  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:32.961642  664006 cri.go:89] found id: ""
	I1212 01:24:32.961707  664006 logs.go:282] 0 containers: []
	W1212 01:24:32.961730  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:32.961751  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:32.961778  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:33.030825  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:33.030868  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:33.049584  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:33.049628  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:33.119468  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:33.119487  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:33.119500  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:33.151978  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:33.152014  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:35.681066  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:35.691540  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:35.691620  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:35.717597  664006 cri.go:89] found id: ""
	I1212 01:24:35.717618  664006 logs.go:282] 0 containers: []
	W1212 01:24:35.717626  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:35.717632  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:35.717692  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:35.743961  664006 cri.go:89] found id: ""
	I1212 01:24:35.743983  664006 logs.go:282] 0 containers: []
	W1212 01:24:35.743991  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:35.743997  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:35.744057  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:35.771251  664006 cri.go:89] found id: ""
	I1212 01:24:35.771274  664006 logs.go:282] 0 containers: []
	W1212 01:24:35.771282  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:35.771289  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:35.771362  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:35.797174  664006 cri.go:89] found id: ""
	I1212 01:24:35.797202  664006 logs.go:282] 0 containers: []
	W1212 01:24:35.797211  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:35.797222  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:35.797294  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:35.822387  664006 cri.go:89] found id: ""
	I1212 01:24:35.822411  664006 logs.go:282] 0 containers: []
	W1212 01:24:35.822420  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:35.822426  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:35.822485  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:35.850857  664006 cri.go:89] found id: ""
	I1212 01:24:35.850922  664006 logs.go:282] 0 containers: []
	W1212 01:24:35.850943  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:35.850975  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:35.851071  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:35.877411  664006 cri.go:89] found id: ""
	I1212 01:24:35.877437  664006 logs.go:282] 0 containers: []
	W1212 01:24:35.877446  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:35.877465  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:35.877523  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:35.904084  664006 cri.go:89] found id: ""
	I1212 01:24:35.904108  664006 logs.go:282] 0 containers: []
	W1212 01:24:35.904117  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:35.904125  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:35.904171  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:35.975009  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:35.975046  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:35.993247  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:35.993305  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:36.074763  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:36.074799  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:36.074817  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:36.106284  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:36.106369  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
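
The eight crictl invocations in each cycle above follow one enumeration pattern: for every expected control-plane component, list container IDs in any state whose name matches, and log a warning when the list comes back empty. A minimal sketch of that loop, assuming a hypothetical runSSH helper in place of minikube's ssh_runner (shown running locally for illustration):

// Sketch of the container-enumeration loop seen in the log: for each
// expected control-plane component, list container IDs in any state whose
// name matches. runSSH is a hypothetical stand-in for minikube's ssh_runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runSSH would run the command on the node over SSH; for illustration it
// simply executes locally.
func runSSH(args ...string) (string, error) {
	out, err := exec.Command(args[0], args[1:]...).Output()
	return string(out), err
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, name := range components {
		// --quiet prints only container IDs; -a includes exited containers.
		out, err := runSSH("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(out)
		if len(ids) == 0 {
			// Mirrors the 'No container was found matching' warnings above.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
	}
}
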
	I1212 01:24:38.635420  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:38.645714  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:38.645795  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:38.672546  664006 cri.go:89] found id: ""
	I1212 01:24:38.672567  664006 logs.go:282] 0 containers: []
	W1212 01:24:38.672576  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:38.672581  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:38.672639  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:38.698827  664006 cri.go:89] found id: ""
	I1212 01:24:38.698849  664006 logs.go:282] 0 containers: []
	W1212 01:24:38.698857  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:38.698863  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:38.698919  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:38.725133  664006 cri.go:89] found id: ""
	I1212 01:24:38.725216  664006 logs.go:282] 0 containers: []
	W1212 01:24:38.725251  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:38.725270  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:38.725366  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:38.752887  664006 cri.go:89] found id: ""
	I1212 01:24:38.752913  664006 logs.go:282] 0 containers: []
	W1212 01:24:38.752922  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:38.752927  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:38.752994  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:38.778412  664006 cri.go:89] found id: ""
	I1212 01:24:38.778436  664006 logs.go:282] 0 containers: []
	W1212 01:24:38.778445  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:38.778451  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:38.778524  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:38.805296  664006 cri.go:89] found id: ""
	I1212 01:24:38.805321  664006 logs.go:282] 0 containers: []
	W1212 01:24:38.805330  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:38.805337  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:38.805396  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:38.830605  664006 cri.go:89] found id: ""
	I1212 01:24:38.830643  664006 logs.go:282] 0 containers: []
	W1212 01:24:38.830653  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:38.830659  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:38.830758  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:38.856596  664006 cri.go:89] found id: ""
	I1212 01:24:38.856622  664006 logs.go:282] 0 containers: []
	W1212 01:24:38.856630  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:38.856639  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:38.856668  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:38.887923  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:38.887959  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:38.915099  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:38.915127  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:38.982876  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:38.982909  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:38.999385  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:38.999413  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:39.067045  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:41.568469  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:41.578757  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:41.578830  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:41.604732  664006 cri.go:89] found id: ""
	I1212 01:24:41.604755  664006 logs.go:282] 0 containers: []
	W1212 01:24:41.604764  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:41.604770  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:41.604831  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:41.632547  664006 cri.go:89] found id: ""
	I1212 01:24:41.632572  664006 logs.go:282] 0 containers: []
	W1212 01:24:41.632581  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:41.632588  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:41.632664  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:41.660011  664006 cri.go:89] found id: ""
	I1212 01:24:41.660036  664006 logs.go:282] 0 containers: []
	W1212 01:24:41.660045  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:41.660051  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:41.660119  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:41.686150  664006 cri.go:89] found id: ""
	I1212 01:24:41.686176  664006 logs.go:282] 0 containers: []
	W1212 01:24:41.686185  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:41.686191  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:41.686259  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:41.711704  664006 cri.go:89] found id: ""
	I1212 01:24:41.711728  664006 logs.go:282] 0 containers: []
	W1212 01:24:41.711737  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:41.711743  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:41.711822  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:41.736945  664006 cri.go:89] found id: ""
	I1212 01:24:41.736967  664006 logs.go:282] 0 containers: []
	W1212 01:24:41.736976  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:41.736982  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:41.737063  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:41.763299  664006 cri.go:89] found id: ""
	I1212 01:24:41.763321  664006 logs.go:282] 0 containers: []
	W1212 01:24:41.763329  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:41.763335  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:41.763393  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:41.788029  664006 cri.go:89] found id: ""
	I1212 01:24:41.788106  664006 logs.go:282] 0 containers: []
	W1212 01:24:41.788122  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:41.788135  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:41.788147  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:41.804732  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:41.804766  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:41.874485  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:41.874547  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:41.874566  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:41.909245  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:41.909284  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:41.938170  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:41.938202  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
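
When no containers are found, each cycle falls back to gathering host-level logs: kubelet and CRI-O via journalctl, kernel warnings via dmesg, and a node description via the bundled kubectl, which is the step that keeps failing with the connection-refused error while the apiserver is down. A sketch of those gathering steps, with the commands copied from the log and a hypothetical runShell helper:

// Sketch of the "Gathering logs for ..." steps above: each log source is a
// single shell pipeline run on the node, capped at 400 lines. The commands
// are copied verbatim from the log; runShell is a hypothetical helper.
package main

import (
	"fmt"
	"os/exec"
)

func runShell(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes" +
			" --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
	}
	for _, s := range sources {
		if _, err := runShell(s.cmd); err != nil {
			// With no apiserver listening on localhost:8443, "describe nodes"
			// exits 1, matching the failures recorded above.
			fmt.Printf("gathering %s failed: %v\n", s.name, err)
		}
	}
}
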
	I1212 01:24:44.507986  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:44.518741  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:44.518808  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:44.546949  664006 cri.go:89] found id: ""
	I1212 01:24:44.546971  664006 logs.go:282] 0 containers: []
	W1212 01:24:44.546979  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:44.546986  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:44.547075  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:44.577829  664006 cri.go:89] found id: ""
	I1212 01:24:44.577849  664006 logs.go:282] 0 containers: []
	W1212 01:24:44.577857  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:44.577863  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:44.577919  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:44.605204  664006 cri.go:89] found id: ""
	I1212 01:24:44.605225  664006 logs.go:282] 0 containers: []
	W1212 01:24:44.605234  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:44.605240  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:44.605306  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:44.630235  664006 cri.go:89] found id: ""
	I1212 01:24:44.630307  664006 logs.go:282] 0 containers: []
	W1212 01:24:44.630330  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:44.630348  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:44.630450  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:44.662168  664006 cri.go:89] found id: ""
	I1212 01:24:44.662235  664006 logs.go:282] 0 containers: []
	W1212 01:24:44.662257  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:44.662269  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:44.662341  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:44.702651  664006 cri.go:89] found id: ""
	I1212 01:24:44.702753  664006 logs.go:282] 0 containers: []
	W1212 01:24:44.702777  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:44.702796  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:44.702893  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:44.732814  664006 cri.go:89] found id: ""
	I1212 01:24:44.732933  664006 logs.go:282] 0 containers: []
	W1212 01:24:44.732970  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:44.733007  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:44.733229  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:44.789081  664006 cri.go:89] found id: ""
	I1212 01:24:44.789205  664006 logs.go:282] 0 containers: []
	W1212 01:24:44.789240  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:44.789287  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:44.789318  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:44.878404  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:44.878532  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:44.901861  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:44.901952  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:44.992160  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:44.992258  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:44.992307  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:45.026350  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:45.026394  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:47.619512  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:47.629723  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:47.629790  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:47.657835  664006 cri.go:89] found id: ""
	I1212 01:24:47.657861  664006 logs.go:282] 0 containers: []
	W1212 01:24:47.657869  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:47.657876  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:47.657937  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:47.686183  664006 cri.go:89] found id: ""
	I1212 01:24:47.686207  664006 logs.go:282] 0 containers: []
	W1212 01:24:47.686216  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:47.686222  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:47.686284  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:47.711060  664006 cri.go:89] found id: ""
	I1212 01:24:47.711082  664006 logs.go:282] 0 containers: []
	W1212 01:24:47.711090  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:47.711096  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:47.711159  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:47.737970  664006 cri.go:89] found id: ""
	I1212 01:24:47.737994  664006 logs.go:282] 0 containers: []
	W1212 01:24:47.738002  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:47.738026  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:47.738313  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:47.769007  664006 cri.go:89] found id: ""
	I1212 01:24:47.769031  664006 logs.go:282] 0 containers: []
	W1212 01:24:47.769039  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:47.769046  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:47.769113  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:47.802210  664006 cri.go:89] found id: ""
	I1212 01:24:47.802233  664006 logs.go:282] 0 containers: []
	W1212 01:24:47.802243  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:47.802249  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:47.802312  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:47.831288  664006 cri.go:89] found id: ""
	I1212 01:24:47.831309  664006 logs.go:282] 0 containers: []
	W1212 01:24:47.831318  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:47.831324  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:47.831382  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:47.856101  664006 cri.go:89] found id: ""
	I1212 01:24:47.856123  664006 logs.go:282] 0 containers: []
	W1212 01:24:47.856131  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:47.856140  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:47.856154  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:47.927887  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:47.927926  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:47.944656  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:47.944686  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:48.013121  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:48.013144  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:48.013159  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:48.045751  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:48.045787  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:50.577815  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:50.587867  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:50.587940  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:50.614382  664006 cri.go:89] found id: ""
	I1212 01:24:50.614404  664006 logs.go:282] 0 containers: []
	W1212 01:24:50.614412  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:50.614418  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:50.614477  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:50.639272  664006 cri.go:89] found id: ""
	I1212 01:24:50.639301  664006 logs.go:282] 0 containers: []
	W1212 01:24:50.639311  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:50.639318  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:50.639379  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:50.668262  664006 cri.go:89] found id: ""
	I1212 01:24:50.668289  664006 logs.go:282] 0 containers: []
	W1212 01:24:50.668299  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:50.668305  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:50.668364  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:50.694560  664006 cri.go:89] found id: ""
	I1212 01:24:50.694582  664006 logs.go:282] 0 containers: []
	W1212 01:24:50.694591  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:50.694598  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:50.694657  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:50.720273  664006 cri.go:89] found id: ""
	I1212 01:24:50.720295  664006 logs.go:282] 0 containers: []
	W1212 01:24:50.720304  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:50.720310  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:50.720369  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:50.749661  664006 cri.go:89] found id: ""
	I1212 01:24:50.749684  664006 logs.go:282] 0 containers: []
	W1212 01:24:50.749693  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:50.749699  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:50.749758  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:50.777694  664006 cri.go:89] found id: ""
	I1212 01:24:50.777720  664006 logs.go:282] 0 containers: []
	W1212 01:24:50.777730  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:50.777736  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:50.777796  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:50.805525  664006 cri.go:89] found id: ""
	I1212 01:24:50.805547  664006 logs.go:282] 0 containers: []
	W1212 01:24:50.805555  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:50.805565  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:50.805576  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:50.874803  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:50.874839  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:50.891811  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:50.891841  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:50.962026  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:50.962045  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:50.962058  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:50.993877  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:50.993912  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:53.530067  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:53.540029  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:53.540098  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:53.566062  664006 cri.go:89] found id: ""
	I1212 01:24:53.566084  664006 logs.go:282] 0 containers: []
	W1212 01:24:53.566093  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:53.566099  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:53.566157  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:53.590709  664006 cri.go:89] found id: ""
	I1212 01:24:53.590737  664006 logs.go:282] 0 containers: []
	W1212 01:24:53.590745  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:53.590752  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:53.590811  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:53.615589  664006 cri.go:89] found id: ""
	I1212 01:24:53.615613  664006 logs.go:282] 0 containers: []
	W1212 01:24:53.615621  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:53.615627  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:53.615690  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:53.640129  664006 cri.go:89] found id: ""
	I1212 01:24:53.640162  664006 logs.go:282] 0 containers: []
	W1212 01:24:53.640172  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:53.640179  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:53.640264  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:53.665674  664006 cri.go:89] found id: ""
	I1212 01:24:53.665695  664006 logs.go:282] 0 containers: []
	W1212 01:24:53.665703  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:53.665709  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:53.665769  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:53.693304  664006 cri.go:89] found id: ""
	I1212 01:24:53.693327  664006 logs.go:282] 0 containers: []
	W1212 01:24:53.693336  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:53.693342  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:53.693404  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:53.718566  664006 cri.go:89] found id: ""
	I1212 01:24:53.718588  664006 logs.go:282] 0 containers: []
	W1212 01:24:53.718597  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:53.718603  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:53.718669  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:53.743937  664006 cri.go:89] found id: ""
	I1212 01:24:53.744002  664006 logs.go:282] 0 containers: []
	W1212 01:24:53.744017  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:53.744026  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:53.744039  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:53.808181  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:53.808203  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:53.808215  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:53.839384  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:53.839418  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:53.868047  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:53.868075  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:53.940979  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:53.941017  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
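
The timestamps show the whole sequence repeating roughly every three seconds: each iteration first checks for a kube-apiserver process with pgrep, and only on a miss re-lists containers and re-gathers logs. A sketch of that wait loop; the three-second interval matches the spacing above, while the overall deadline is an assumption:

// Sketch of the apiserver wait loop driving the ~3s cadence above: poll for
// a kube-apiserver process and, on each miss, re-run the container listing
// and log gathering until a deadline expires. Interval and deadline are
// assumptions, not values taken from the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// Mirrors the check in the log: pgrep exits non-zero when no
	// kube-apiserver process matches the pattern.
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("apiserver is up")
			return
		}
		// The real code re-lists CRI containers and gathers kubelet, dmesg,
		// and CRI-O logs here before sleeping, as seen in each cycle above.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
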
	I1212 01:24:56.459047  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:56.469300  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:56.469374  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:56.500913  664006 cri.go:89] found id: ""
	I1212 01:24:56.500945  664006 logs.go:282] 0 containers: []
	W1212 01:24:56.500954  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:56.500961  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:56.501019  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:56.535778  664006 cri.go:89] found id: ""
	I1212 01:24:56.535801  664006 logs.go:282] 0 containers: []
	W1212 01:24:56.535809  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:56.535815  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:56.535876  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:56.564147  664006 cri.go:89] found id: ""
	I1212 01:24:56.564174  664006 logs.go:282] 0 containers: []
	W1212 01:24:56.564184  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:56.564191  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:56.564250  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:56.588900  664006 cri.go:89] found id: ""
	I1212 01:24:56.588923  664006 logs.go:282] 0 containers: []
	W1212 01:24:56.588937  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:56.588944  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:56.589001  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:56.613930  664006 cri.go:89] found id: ""
	I1212 01:24:56.613955  664006 logs.go:282] 0 containers: []
	W1212 01:24:56.613965  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:56.613971  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:56.614031  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:56.638558  664006 cri.go:89] found id: ""
	I1212 01:24:56.638581  664006 logs.go:282] 0 containers: []
	W1212 01:24:56.638590  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:56.638596  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:56.638655  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:56.668008  664006 cri.go:89] found id: ""
	I1212 01:24:56.668029  664006 logs.go:282] 0 containers: []
	W1212 01:24:56.668038  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:56.668044  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:56.668103  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:56.696748  664006 cri.go:89] found id: ""
	I1212 01:24:56.696770  664006 logs.go:282] 0 containers: []
	W1212 01:24:56.696778  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:56.696787  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:56.696799  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:56.764511  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:56.764546  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:56.780295  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:56.780324  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:56.845435  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:56.845454  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:56.845467  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:56.878038  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:56.878072  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:24:59.408742  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:24:59.418980  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:24:59.419088  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:24:59.443604  664006 cri.go:89] found id: ""
	I1212 01:24:59.443627  664006 logs.go:282] 0 containers: []
	W1212 01:24:59.443636  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:24:59.443643  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:24:59.443732  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:24:59.472583  664006 cri.go:89] found id: ""
	I1212 01:24:59.472605  664006 logs.go:282] 0 containers: []
	W1212 01:24:59.472615  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:24:59.472621  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:24:59.472740  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:24:59.505464  664006 cri.go:89] found id: ""
	I1212 01:24:59.505490  664006 logs.go:282] 0 containers: []
	W1212 01:24:59.505499  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:24:59.505505  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:24:59.505623  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:24:59.540864  664006 cri.go:89] found id: ""
	I1212 01:24:59.540886  664006 logs.go:282] 0 containers: []
	W1212 01:24:59.540894  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:24:59.540900  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:24:59.541061  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:24:59.570189  664006 cri.go:89] found id: ""
	I1212 01:24:59.570212  664006 logs.go:282] 0 containers: []
	W1212 01:24:59.570222  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:24:59.570227  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:24:59.570345  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:24:59.594840  664006 cri.go:89] found id: ""
	I1212 01:24:59.594864  664006 logs.go:282] 0 containers: []
	W1212 01:24:59.594873  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:24:59.594880  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:24:59.594997  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:24:59.620456  664006 cri.go:89] found id: ""
	I1212 01:24:59.620478  664006 logs.go:282] 0 containers: []
	W1212 01:24:59.620487  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:24:59.620493  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:24:59.620572  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:24:59.645829  664006 cri.go:89] found id: ""
	I1212 01:24:59.645854  664006 logs.go:282] 0 containers: []
	W1212 01:24:59.645863  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:24:59.645873  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:24:59.645908  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:24:59.714302  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:24:59.714337  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:24:59.731060  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:24:59.731088  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:24:59.798821  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:24:59.798842  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:24:59.798854  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:24:59.829576  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:24:59.829609  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
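
The "container status" step relies on shell fallback chaining inside a single bash -c invocation: resolve crictl's path with which (defaulting to the bare name if lookup fails), and if the crictl listing fails entirely, fall back to docker ps -a. A sketch of issuing that one command as-is:

// Sketch of the "container status" fallback above: prefer crictl, resolving
// its path with `which`, and fall back to `docker ps -a` when crictl is
// missing or errors. The || chaining runs inside one bash -c invocation.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("container status failed: %v\n", err)
		return
	}
	fmt.Print(string(out))
}
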
	I1212 01:25:02.360641  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:02.371242  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:02.371327  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:02.397090  664006 cri.go:89] found id: ""
	I1212 01:25:02.397117  664006 logs.go:282] 0 containers: []
	W1212 01:25:02.397126  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:02.397132  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:02.397193  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:02.423254  664006 cri.go:89] found id: ""
	I1212 01:25:02.423276  664006 logs.go:282] 0 containers: []
	W1212 01:25:02.423285  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:02.423290  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:02.423351  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:02.453537  664006 cri.go:89] found id: ""
	I1212 01:25:02.453561  664006 logs.go:282] 0 containers: []
	W1212 01:25:02.453570  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:02.453576  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:02.453643  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:02.486265  664006 cri.go:89] found id: ""
	I1212 01:25:02.486292  664006 logs.go:282] 0 containers: []
	W1212 01:25:02.486303  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:02.486309  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:02.486368  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:02.537278  664006 cri.go:89] found id: ""
	I1212 01:25:02.537305  664006 logs.go:282] 0 containers: []
	W1212 01:25:02.537314  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:02.537321  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:02.537381  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:02.564764  664006 cri.go:89] found id: ""
	I1212 01:25:02.564788  664006 logs.go:282] 0 containers: []
	W1212 01:25:02.564798  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:02.564805  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:02.564871  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:02.590082  664006 cri.go:89] found id: ""
	I1212 01:25:02.590118  664006 logs.go:282] 0 containers: []
	W1212 01:25:02.590127  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:02.590149  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:02.590231  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:02.617779  664006 cri.go:89] found id: ""
	I1212 01:25:02.617807  664006 logs.go:282] 0 containers: []
	W1212 01:25:02.617816  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:02.617826  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:02.617838  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:02.687013  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:02.687032  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:02.687044  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:02.717622  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:02.717656  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:02.745601  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:02.745626  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:02.813679  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:02.813716  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:05.330375  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:05.340187  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:05.340279  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:05.371216  664006 cri.go:89] found id: ""
	I1212 01:25:05.371239  664006 logs.go:282] 0 containers: []
	W1212 01:25:05.371247  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:05.371253  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:05.371311  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:05.396801  664006 cri.go:89] found id: ""
	I1212 01:25:05.396833  664006 logs.go:282] 0 containers: []
	W1212 01:25:05.396842  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:05.396849  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:05.396913  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:05.422316  664006 cri.go:89] found id: ""
	I1212 01:25:05.422338  664006 logs.go:282] 0 containers: []
	W1212 01:25:05.422346  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:05.422352  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:05.422411  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:05.447718  664006 cri.go:89] found id: ""
	I1212 01:25:05.447788  664006 logs.go:282] 0 containers: []
	W1212 01:25:05.447811  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:05.447829  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:05.447916  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:05.480906  664006 cri.go:89] found id: ""
	I1212 01:25:05.480943  664006 logs.go:282] 0 containers: []
	W1212 01:25:05.480952  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:05.480976  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:05.481058  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:05.531349  664006 cri.go:89] found id: ""
	I1212 01:25:05.531373  664006 logs.go:282] 0 containers: []
	W1212 01:25:05.531404  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:05.531412  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:05.531496  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:05.573243  664006 cri.go:89] found id: ""
	I1212 01:25:05.573267  664006 logs.go:282] 0 containers: []
	W1212 01:25:05.573275  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:05.573291  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:05.573384  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:05.599572  664006 cri.go:89] found id: ""
	I1212 01:25:05.599607  664006 logs.go:282] 0 containers: []
	W1212 01:25:05.599618  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:05.599643  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:05.599667  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:05.667712  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:05.667750  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:05.683680  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:05.683709  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:05.780571  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:05.780591  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:05.780604  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:05.822239  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:05.822277  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:08.372916  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:08.382990  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:08.383070  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:08.407389  664006 cri.go:89] found id: ""
	I1212 01:25:08.407413  664006 logs.go:282] 0 containers: []
	W1212 01:25:08.407421  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:08.407428  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:08.407486  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:08.433239  664006 cri.go:89] found id: ""
	I1212 01:25:08.433263  664006 logs.go:282] 0 containers: []
	W1212 01:25:08.433271  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:08.433278  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:08.433337  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:08.458805  664006 cri.go:89] found id: ""
	I1212 01:25:08.458826  664006 logs.go:282] 0 containers: []
	W1212 01:25:08.458834  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:08.458840  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:08.458898  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:08.496507  664006 cri.go:89] found id: ""
	I1212 01:25:08.496530  664006 logs.go:282] 0 containers: []
	W1212 01:25:08.496539  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:08.496545  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:08.496603  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:08.537550  664006 cri.go:89] found id: ""
	I1212 01:25:08.537577  664006 logs.go:282] 0 containers: []
	W1212 01:25:08.537585  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:08.537591  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:08.537651  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:08.568053  664006 cri.go:89] found id: ""
	I1212 01:25:08.568129  664006 logs.go:282] 0 containers: []
	W1212 01:25:08.568151  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:08.568163  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:08.568233  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:08.595119  664006 cri.go:89] found id: ""
	I1212 01:25:08.595141  664006 logs.go:282] 0 containers: []
	W1212 01:25:08.595149  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:08.595156  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:08.595219  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:08.621030  664006 cri.go:89] found id: ""
	I1212 01:25:08.621054  664006 logs.go:282] 0 containers: []
	W1212 01:25:08.621063  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:08.621071  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:08.621111  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:08.689429  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:08.689464  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:08.705427  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:08.705454  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:08.776111  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:08.776130  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:08.776143  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:08.808307  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:08.808341  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:11.338028  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:11.348069  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:11.348142  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:11.373382  664006 cri.go:89] found id: ""
	I1212 01:25:11.373449  664006 logs.go:282] 0 containers: []
	W1212 01:25:11.373471  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:11.373491  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:11.373576  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:11.399339  664006 cri.go:89] found id: ""
	I1212 01:25:11.399362  664006 logs.go:282] 0 containers: []
	W1212 01:25:11.399370  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:11.399376  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:11.399438  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:11.423906  664006 cri.go:89] found id: ""
	I1212 01:25:11.423929  664006 logs.go:282] 0 containers: []
	W1212 01:25:11.423937  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:11.423943  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:11.424008  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:11.452102  664006 cri.go:89] found id: ""
	I1212 01:25:11.452130  664006 logs.go:282] 0 containers: []
	W1212 01:25:11.452139  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:11.452146  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:11.452209  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:11.491984  664006 cri.go:89] found id: ""
	I1212 01:25:11.492009  664006 logs.go:282] 0 containers: []
	W1212 01:25:11.492025  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:11.492031  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:11.492099  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:11.527550  664006 cri.go:89] found id: ""
	I1212 01:25:11.527577  664006 logs.go:282] 0 containers: []
	W1212 01:25:11.527593  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:11.527600  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:11.527678  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:11.558425  664006 cri.go:89] found id: ""
	I1212 01:25:11.558459  664006 logs.go:282] 0 containers: []
	W1212 01:25:11.558470  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:11.558476  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:11.558544  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:11.584665  664006 cri.go:89] found id: ""
	I1212 01:25:11.584688  664006 logs.go:282] 0 containers: []
	W1212 01:25:11.584697  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:11.584706  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:11.584718  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:11.652575  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:11.652611  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:11.668644  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:11.668716  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:11.732259  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:11.732278  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:11.732293  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:11.764933  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:11.764967  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
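
The timestamps show each cycle beginning roughly three seconds after the last with the same process check, so the collector is in effect waiting for an apiserver that never appears. A hedged approximation of that wait loop (the pgrep pattern is copied verbatim from the log; the 3-second interval is read off the timestamps, not taken from minikube's source):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      # no apiserver process yet: re-gather diagnostics and retry, as above
      sleep 3
    done
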
	I1212 01:25:14.296633  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:14.307213  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:14.307286  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:14.338840  664006 cri.go:89] found id: ""
	I1212 01:25:14.338866  664006 logs.go:282] 0 containers: []
	W1212 01:25:14.338875  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:14.338882  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:14.338945  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:14.369795  664006 cri.go:89] found id: ""
	I1212 01:25:14.369817  664006 logs.go:282] 0 containers: []
	W1212 01:25:14.369826  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:14.369832  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:14.369912  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:14.398315  664006 cri.go:89] found id: ""
	I1212 01:25:14.398338  664006 logs.go:282] 0 containers: []
	W1212 01:25:14.398347  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:14.398353  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:14.398417  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:14.423135  664006 cri.go:89] found id: ""
	I1212 01:25:14.423158  664006 logs.go:282] 0 containers: []
	W1212 01:25:14.423167  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:14.423173  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:14.423233  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:14.453509  664006 cri.go:89] found id: ""
	I1212 01:25:14.453536  664006 logs.go:282] 0 containers: []
	W1212 01:25:14.453553  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:14.453560  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:14.453636  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:14.490164  664006 cri.go:89] found id: ""
	I1212 01:25:14.490211  664006 logs.go:282] 0 containers: []
	W1212 01:25:14.490220  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:14.490237  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:14.490317  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:14.524066  664006 cri.go:89] found id: ""
	I1212 01:25:14.524102  664006 logs.go:282] 0 containers: []
	W1212 01:25:14.524111  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:14.524117  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:14.524182  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:14.557103  664006 cri.go:89] found id: ""
	I1212 01:25:14.557149  664006 logs.go:282] 0 containers: []
	W1212 01:25:14.557158  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:14.557172  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:14.557184  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:14.589172  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:14.589207  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:14.622940  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:14.622964  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:14.692146  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:14.692182  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:14.708521  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:14.708548  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:14.779059  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:17.279310  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:17.289761  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:17.289864  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:17.315055  664006 cri.go:89] found id: ""
	I1212 01:25:17.315077  664006 logs.go:282] 0 containers: []
	W1212 01:25:17.315086  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:17.315092  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:17.315167  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:17.343762  664006 cri.go:89] found id: ""
	I1212 01:25:17.343833  664006 logs.go:282] 0 containers: []
	W1212 01:25:17.343855  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:17.343873  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:17.343964  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:17.370390  664006 cri.go:89] found id: ""
	I1212 01:25:17.370414  664006 logs.go:282] 0 containers: []
	W1212 01:25:17.370423  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:17.370428  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:17.370486  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:17.396604  664006 cri.go:89] found id: ""
	I1212 01:25:17.396627  664006 logs.go:282] 0 containers: []
	W1212 01:25:17.396635  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:17.396641  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:17.396701  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:17.426613  664006 cri.go:89] found id: ""
	I1212 01:25:17.426650  664006 logs.go:282] 0 containers: []
	W1212 01:25:17.426660  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:17.426672  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:17.426760  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:17.457567  664006 cri.go:89] found id: ""
	I1212 01:25:17.457601  664006 logs.go:282] 0 containers: []
	W1212 01:25:17.457611  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:17.457618  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:17.457676  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:17.502654  664006 cri.go:89] found id: ""
	I1212 01:25:17.502677  664006 logs.go:282] 0 containers: []
	W1212 01:25:17.502702  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:17.502708  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:17.502778  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:17.559050  664006 cri.go:89] found id: ""
	I1212 01:25:17.559076  664006 logs.go:282] 0 containers: []
	W1212 01:25:17.559086  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:17.559095  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:17.559106  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:17.635475  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:17.635516  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:17.652264  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:17.652349  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:17.718462  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:17.718525  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:17.718553  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:17.750401  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:17.750438  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
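
Every "describe nodes" attempt above fails identically: kubectl inside the node reads /var/lib/minikube/kubeconfig, targets localhost:8443, and finds nothing listening, which matches the empty crictl listings. One way to confirm the symptom by hand from inside the node (the ss check is an illustrative assumption, not part of this log; the kubectl line is the collector's exact command):

    sudo ss -tlnp | grep ':8443' || echo 'no listener on :8443'
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
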
	I1212 01:25:20.279521  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:20.289933  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:20.290002  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:20.317305  664006 cri.go:89] found id: ""
	I1212 01:25:20.317328  664006 logs.go:282] 0 containers: []
	W1212 01:25:20.317337  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:20.317343  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:20.317402  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:20.341302  664006 cri.go:89] found id: ""
	I1212 01:25:20.341324  664006 logs.go:282] 0 containers: []
	W1212 01:25:20.341341  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:20.341347  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:20.341403  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:20.369134  664006 cri.go:89] found id: ""
	I1212 01:25:20.369155  664006 logs.go:282] 0 containers: []
	W1212 01:25:20.369164  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:20.369170  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:20.369235  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:20.394593  664006 cri.go:89] found id: ""
	I1212 01:25:20.394666  664006 logs.go:282] 0 containers: []
	W1212 01:25:20.394719  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:20.394748  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:20.394827  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:20.419075  664006 cri.go:89] found id: ""
	I1212 01:25:20.419099  664006 logs.go:282] 0 containers: []
	W1212 01:25:20.419107  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:20.419113  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:20.419170  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:20.443905  664006 cri.go:89] found id: ""
	I1212 01:25:20.443928  664006 logs.go:282] 0 containers: []
	W1212 01:25:20.443936  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:20.443943  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:20.444013  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:20.472691  664006 cri.go:89] found id: ""
	I1212 01:25:20.472717  664006 logs.go:282] 0 containers: []
	W1212 01:25:20.472726  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:20.472754  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:20.472839  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:20.507025  664006 cri.go:89] found id: ""
	I1212 01:25:20.507052  664006 logs.go:282] 0 containers: []
	W1212 01:25:20.507061  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:20.507093  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:20.507116  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:20.581925  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:20.581958  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:20.598191  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:20.598271  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:20.662970  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:20.662995  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:20.663007  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:20.694545  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:20.694578  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:23.225833  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:23.235970  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:23.236043  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:23.261003  664006 cri.go:89] found id: ""
	I1212 01:25:23.261028  664006 logs.go:282] 0 containers: []
	W1212 01:25:23.261037  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:23.261044  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:23.261107  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:23.287298  664006 cri.go:89] found id: ""
	I1212 01:25:23.287320  664006 logs.go:282] 0 containers: []
	W1212 01:25:23.287328  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:23.287335  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:23.287397  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:23.314016  664006 cri.go:89] found id: ""
	I1212 01:25:23.314037  664006 logs.go:282] 0 containers: []
	W1212 01:25:23.314045  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:23.314052  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:23.314111  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:23.340288  664006 cri.go:89] found id: ""
	I1212 01:25:23.340312  664006 logs.go:282] 0 containers: []
	W1212 01:25:23.340321  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:23.340327  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:23.340394  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:23.366496  664006 cri.go:89] found id: ""
	I1212 01:25:23.366521  664006 logs.go:282] 0 containers: []
	W1212 01:25:23.366530  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:23.366536  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:23.366594  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:23.393883  664006 cri.go:89] found id: ""
	I1212 01:25:23.393906  664006 logs.go:282] 0 containers: []
	W1212 01:25:23.393915  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:23.393921  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:23.393993  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:23.420129  664006 cri.go:89] found id: ""
	I1212 01:25:23.420151  664006 logs.go:282] 0 containers: []
	W1212 01:25:23.420159  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:23.420165  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:23.420224  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:23.445068  664006 cri.go:89] found id: ""
	I1212 01:25:23.445092  664006 logs.go:282] 0 containers: []
	W1212 01:25:23.445100  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:23.445109  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:23.445120  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:23.520069  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:23.520162  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:23.539167  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:23.539192  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:23.628764  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:23.628788  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:23.628800  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:23.667208  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:23.667294  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
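
Besides the CRI probes, every cycle pulls the same four log sources: the kubelet and CRI-O journald units, recent kernel warnings from dmesg, and the container list. The exact commands, verbatim from this log, double as a by-hand collection script on the node:

    sudo journalctl -u kubelet -n 400      # kubelet unit log, last 400 lines
    sudo journalctl -u crio -n 400         # CRI-O unit log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
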
	I1212 01:25:26.205228  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:26.215414  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:26.215507  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:26.248756  664006 cri.go:89] found id: ""
	I1212 01:25:26.248779  664006 logs.go:282] 0 containers: []
	W1212 01:25:26.248787  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:26.248793  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:26.248861  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:26.275936  664006 cri.go:89] found id: ""
	I1212 01:25:26.275960  664006 logs.go:282] 0 containers: []
	W1212 01:25:26.275969  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:26.275974  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:26.276034  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:26.301631  664006 cri.go:89] found id: ""
	I1212 01:25:26.301653  664006 logs.go:282] 0 containers: []
	W1212 01:25:26.301662  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:26.301668  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:26.301725  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:26.329033  664006 cri.go:89] found id: ""
	I1212 01:25:26.329060  664006 logs.go:282] 0 containers: []
	W1212 01:25:26.329069  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:26.329075  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:26.329132  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:26.353661  664006 cri.go:89] found id: ""
	I1212 01:25:26.353684  664006 logs.go:282] 0 containers: []
	W1212 01:25:26.353692  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:26.353698  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:26.353761  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:26.383504  664006 cri.go:89] found id: ""
	I1212 01:25:26.383526  664006 logs.go:282] 0 containers: []
	W1212 01:25:26.383534  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:26.383541  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:26.383599  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:26.408983  664006 cri.go:89] found id: ""
	I1212 01:25:26.409007  664006 logs.go:282] 0 containers: []
	W1212 01:25:26.409016  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:26.409022  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:26.409080  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:26.433916  664006 cri.go:89] found id: ""
	I1212 01:25:26.433938  664006 logs.go:282] 0 containers: []
	W1212 01:25:26.433947  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:26.433956  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:26.433968  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:26.465247  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:26.465276  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:26.562341  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:26.562384  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:26.578726  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:26.578756  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:26.645299  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:26.645370  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:26.645399  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:29.177048  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:29.187183  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:29.187256  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:29.213107  664006 cri.go:89] found id: ""
	I1212 01:25:29.213131  664006 logs.go:282] 0 containers: []
	W1212 01:25:29.213140  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:29.213145  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:29.213203  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:29.238337  664006 cri.go:89] found id: ""
	I1212 01:25:29.238362  664006 logs.go:282] 0 containers: []
	W1212 01:25:29.238370  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:29.238376  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:29.238435  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:29.263621  664006 cri.go:89] found id: ""
	I1212 01:25:29.263646  664006 logs.go:282] 0 containers: []
	W1212 01:25:29.263654  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:29.263660  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:29.263717  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:29.288846  664006 cri.go:89] found id: ""
	I1212 01:25:29.288871  664006 logs.go:282] 0 containers: []
	W1212 01:25:29.288880  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:29.288887  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:29.288978  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:29.314331  664006 cri.go:89] found id: ""
	I1212 01:25:29.314356  664006 logs.go:282] 0 containers: []
	W1212 01:25:29.314365  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:29.314371  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:29.314430  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:29.340992  664006 cri.go:89] found id: ""
	I1212 01:25:29.341018  664006 logs.go:282] 0 containers: []
	W1212 01:25:29.341028  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:29.341034  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:29.341095  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:29.369838  664006 cri.go:89] found id: ""
	I1212 01:25:29.369863  664006 logs.go:282] 0 containers: []
	W1212 01:25:29.369873  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:29.369879  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:29.369939  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:29.394427  664006 cri.go:89] found id: ""
	I1212 01:25:29.394453  664006 logs.go:282] 0 containers: []
	W1212 01:25:29.394463  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:29.394472  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:29.394504  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:29.463518  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:29.463556  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:29.480479  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:29.480509  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:29.566471  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:29.566494  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:29.566506  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:29.598661  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:29.598703  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:32.127579  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:32.138549  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:32.138621  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:32.165074  664006 cri.go:89] found id: ""
	I1212 01:25:32.165099  664006 logs.go:282] 0 containers: []
	W1212 01:25:32.165108  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:32.165114  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:32.165180  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:32.193380  664006 cri.go:89] found id: ""
	I1212 01:25:32.193403  664006 logs.go:282] 0 containers: []
	W1212 01:25:32.193412  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:32.193418  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:32.193483  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:32.226032  664006 cri.go:89] found id: ""
	I1212 01:25:32.226053  664006 logs.go:282] 0 containers: []
	W1212 01:25:32.226062  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:32.226068  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:32.226125  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:32.251489  664006 cri.go:89] found id: ""
	I1212 01:25:32.251515  664006 logs.go:282] 0 containers: []
	W1212 01:25:32.251524  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:32.251530  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:32.251604  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:32.276992  664006 cri.go:89] found id: ""
	I1212 01:25:32.277017  664006 logs.go:282] 0 containers: []
	W1212 01:25:32.277026  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:32.277032  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:32.277093  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:32.301540  664006 cri.go:89] found id: ""
	I1212 01:25:32.301565  664006 logs.go:282] 0 containers: []
	W1212 01:25:32.301574  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:32.301580  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:32.301640  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:32.329302  664006 cri.go:89] found id: ""
	I1212 01:25:32.329327  664006 logs.go:282] 0 containers: []
	W1212 01:25:32.329335  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:32.329345  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:32.329404  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:32.355562  664006 cri.go:89] found id: ""
	I1212 01:25:32.355586  664006 logs.go:282] 0 containers: []
	W1212 01:25:32.355595  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:32.355604  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:32.355635  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:32.424752  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:32.424793  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:32.441101  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:32.441129  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:32.534656  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:32.534698  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:32.534732  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:32.566968  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:32.567002  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
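
The container-status command seen throughout uses a double fallback: the backtick substitution resolves crictl's full path when "which" finds it (and otherwise leaves the bare name for a PATH lookup), and if the crictl invocation fails outright the collector falls back to docker. Spelled out as two steps:

    CRICTL=$(which crictl || echo crictl)       # full path if installed, else bare name
    sudo "$CRICTL" ps -a || sudo docker ps -a   # docker fallback for non-crictl nodes
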
	I1212 01:25:35.098113  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:35.109219  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:35.109294  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:35.136726  664006 cri.go:89] found id: ""
	I1212 01:25:35.136762  664006 logs.go:282] 0 containers: []
	W1212 01:25:35.136770  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:35.136776  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:35.136838  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:35.165194  664006 cri.go:89] found id: ""
	I1212 01:25:35.165227  664006 logs.go:282] 0 containers: []
	W1212 01:25:35.165235  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:35.165245  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:35.165315  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:35.192576  664006 cri.go:89] found id: ""
	I1212 01:25:35.192601  664006 logs.go:282] 0 containers: []
	W1212 01:25:35.192610  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:35.192616  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:35.192678  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:35.221139  664006 cri.go:89] found id: ""
	I1212 01:25:35.221163  664006 logs.go:282] 0 containers: []
	W1212 01:25:35.221172  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:35.221182  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:35.221249  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:35.247972  664006 cri.go:89] found id: ""
	I1212 01:25:35.247997  664006 logs.go:282] 0 containers: []
	W1212 01:25:35.248006  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:35.248011  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:35.248078  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:35.281222  664006 cri.go:89] found id: ""
	I1212 01:25:35.281249  664006 logs.go:282] 0 containers: []
	W1212 01:25:35.281258  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:35.281265  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:35.281326  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:35.312081  664006 cri.go:89] found id: ""
	I1212 01:25:35.312107  664006 logs.go:282] 0 containers: []
	W1212 01:25:35.312116  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:35.312122  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:35.312183  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:35.342786  664006 cri.go:89] found id: ""
	I1212 01:25:35.342811  664006 logs.go:282] 0 containers: []
	W1212 01:25:35.342819  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:35.342828  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:35.342839  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:35.422743  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:35.422777  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:35.439329  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:35.439361  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:35.587201  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:35.587224  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:35.587239  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:35.640472  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:35.640513  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:38.197898  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:38.208440  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:38.208518  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:38.234001  664006 cri.go:89] found id: ""
	I1212 01:25:38.234025  664006 logs.go:282] 0 containers: []
	W1212 01:25:38.234033  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:38.234039  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:38.234100  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:38.264996  664006 cri.go:89] found id: ""
	I1212 01:25:38.265063  664006 logs.go:282] 0 containers: []
	W1212 01:25:38.265084  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:38.265100  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:38.265191  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:38.290259  664006 cri.go:89] found id: ""
	I1212 01:25:38.290284  664006 logs.go:282] 0 containers: []
	W1212 01:25:38.290293  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:38.290305  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:38.290362  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:38.316004  664006 cri.go:89] found id: ""
	I1212 01:25:38.316029  664006 logs.go:282] 0 containers: []
	W1212 01:25:38.316037  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:38.316044  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:38.316133  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:38.346108  664006 cri.go:89] found id: ""
	I1212 01:25:38.346134  664006 logs.go:282] 0 containers: []
	W1212 01:25:38.346142  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:38.346148  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:38.346207  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:38.371806  664006 cri.go:89] found id: ""
	I1212 01:25:38.371829  664006 logs.go:282] 0 containers: []
	W1212 01:25:38.371837  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:38.371843  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:38.371910  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:38.398630  664006 cri.go:89] found id: ""
	I1212 01:25:38.398652  664006 logs.go:282] 0 containers: []
	W1212 01:25:38.398661  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:38.398667  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:38.398745  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:38.424781  664006 cri.go:89] found id: ""
	I1212 01:25:38.424804  664006 logs.go:282] 0 containers: []
	W1212 01:25:38.424813  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:38.424821  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:38.424836  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:38.494299  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:38.494338  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:38.510558  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:38.510588  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:38.578838  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:38.578861  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:38.578873  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:38.610606  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:38.610640  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:41.141243  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:41.151290  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:41.151371  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:41.177070  664006 cri.go:89] found id: ""
	I1212 01:25:41.177094  664006 logs.go:282] 0 containers: []
	W1212 01:25:41.177103  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:41.177109  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:41.177169  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:41.207903  664006 cri.go:89] found id: ""
	I1212 01:25:41.207927  664006 logs.go:282] 0 containers: []
	W1212 01:25:41.207936  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:41.207941  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:41.208001  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:41.237895  664006 cri.go:89] found id: ""
	I1212 01:25:41.237930  664006 logs.go:282] 0 containers: []
	W1212 01:25:41.237939  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:41.237945  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:41.238004  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:41.263456  664006 cri.go:89] found id: ""
	I1212 01:25:41.263478  664006 logs.go:282] 0 containers: []
	W1212 01:25:41.263486  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:41.263494  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:41.263552  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:41.288439  664006 cri.go:89] found id: ""
	I1212 01:25:41.288466  664006 logs.go:282] 0 containers: []
	W1212 01:25:41.288475  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:41.288481  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:41.288539  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:41.319535  664006 cri.go:89] found id: ""
	I1212 01:25:41.319556  664006 logs.go:282] 0 containers: []
	W1212 01:25:41.319564  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:41.319570  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:41.319628  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:41.344942  664006 cri.go:89] found id: ""
	I1212 01:25:41.344966  664006 logs.go:282] 0 containers: []
	W1212 01:25:41.344975  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:41.344981  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:41.345039  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:41.370460  664006 cri.go:89] found id: ""
	I1212 01:25:41.370484  664006 logs.go:282] 0 containers: []
	W1212 01:25:41.370493  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:41.370502  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:41.370514  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:41.437228  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:41.437245  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:41.437258  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:41.468736  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:41.468769  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:41.496361  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:41.496389  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:41.573181  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:41.573220  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
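	(Editor's note: the cycle above, and each repetition below, is minikube's apiserver wait loop: it polls for a kube-apiserver process with pgrep, sweeps every expected control-plane container through crictl, and, finding none, re-gathers kubelet, dmesg, CRI-O, and describe-nodes output before retrying. A minimal sketch of the crictl sweep, reproducible by hand inside the node, e.g. via "minikube -p <profile> ssh" — the profile name and the loop wrapper are illustrative assumptions; the crictl flags are taken verbatim from the log:

	    # Hypothetical reproduction of the per-component container sweep above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet storage-provisioner; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""   # matches logs.go:284
	      else
	        echo "$name -> $ids"
	      fi
	    done

	In this run every iteration prints the empty case, which is why the same eight "No container was found matching" warnings repeat verbatim on each retry.)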
	I1212 01:25:44.090822  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:44.102156  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:44.102227  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:44.136999  664006 cri.go:89] found id: ""
	I1212 01:25:44.137025  664006 logs.go:282] 0 containers: []
	W1212 01:25:44.137033  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:44.137039  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:44.137102  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:44.169142  664006 cri.go:89] found id: ""
	I1212 01:25:44.169170  664006 logs.go:282] 0 containers: []
	W1212 01:25:44.169178  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:44.169185  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:44.169243  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:44.211187  664006 cri.go:89] found id: ""
	I1212 01:25:44.211210  664006 logs.go:282] 0 containers: []
	W1212 01:25:44.211218  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:44.211224  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:44.211284  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:44.251690  664006 cri.go:89] found id: ""
	I1212 01:25:44.251712  664006 logs.go:282] 0 containers: []
	W1212 01:25:44.251721  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:44.251726  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:44.251794  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:44.300974  664006 cri.go:89] found id: ""
	I1212 01:25:44.300995  664006 logs.go:282] 0 containers: []
	W1212 01:25:44.301004  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:44.301011  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:44.301074  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:44.333305  664006 cri.go:89] found id: ""
	I1212 01:25:44.333327  664006 logs.go:282] 0 containers: []
	W1212 01:25:44.333336  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:44.333342  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:44.333407  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:44.366419  664006 cri.go:89] found id: ""
	I1212 01:25:44.366440  664006 logs.go:282] 0 containers: []
	W1212 01:25:44.366448  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:44.366453  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:44.366510  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:44.391969  664006 cri.go:89] found id: ""
	I1212 01:25:44.391989  664006 logs.go:282] 0 containers: []
	W1212 01:25:44.391998  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:44.392006  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:44.392023  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:44.408773  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:44.408857  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:44.475128  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:44.475147  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:44.475160  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:44.506658  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:44.506783  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:44.540673  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:44.540699  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:47.114839  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:47.124911  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:47.124979  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:47.150644  664006 cri.go:89] found id: ""
	I1212 01:25:47.150672  664006 logs.go:282] 0 containers: []
	W1212 01:25:47.150733  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:47.150740  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:47.150823  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:47.175736  664006 cri.go:89] found id: ""
	I1212 01:25:47.175762  664006 logs.go:282] 0 containers: []
	W1212 01:25:47.175773  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:47.175779  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:47.175845  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:47.201392  664006 cri.go:89] found id: ""
	I1212 01:25:47.201419  664006 logs.go:282] 0 containers: []
	W1212 01:25:47.201429  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:47.201435  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:47.201499  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:47.235101  664006 cri.go:89] found id: ""
	I1212 01:25:47.235124  664006 logs.go:282] 0 containers: []
	W1212 01:25:47.235133  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:47.235139  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:47.235200  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:47.260877  664006 cri.go:89] found id: ""
	I1212 01:25:47.260902  664006 logs.go:282] 0 containers: []
	W1212 01:25:47.260911  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:47.260917  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:47.260976  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:47.288939  664006 cri.go:89] found id: ""
	I1212 01:25:47.288964  664006 logs.go:282] 0 containers: []
	W1212 01:25:47.288973  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:47.288980  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:47.289045  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:47.315948  664006 cri.go:89] found id: ""
	I1212 01:25:47.315973  664006 logs.go:282] 0 containers: []
	W1212 01:25:47.315982  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:47.315988  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:47.316060  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:47.343594  664006 cri.go:89] found id: ""
	I1212 01:25:47.343619  664006 logs.go:282] 0 containers: []
	W1212 01:25:47.343628  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:47.343637  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:47.343648  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:47.375210  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:47.375244  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:47.403453  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:47.403481  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:47.471801  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:47.471838  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:47.488018  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:47.488044  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:47.560428  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
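	(Editor's note: the recurring "connection to the server localhost:8443 was refused" is consistent with the empty crictl sweeps: with no kube-apiserver container running, nothing listens on the apiserver port, so every kubectl call through the node-local kubeconfig is refused. A quick hedged check — binary and kubeconfig paths are copied verbatim from the log; the ss probe and the assumption that 8443 is the configured apiserver port are mine:

	    sudo crictl ps -a --name=kube-apiserver                  # expect empty, as in the log
	    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	        --kubeconfig=/var/lib/minikube/kubeconfig get nodes  # expected to be refused here

	Until the first command returns a container ID, the describe-nodes step below will keep failing with the same stderr.)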
	I1212 01:25:50.060762  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:50.071628  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:50.071703  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:50.102047  664006 cri.go:89] found id: ""
	I1212 01:25:50.102071  664006 logs.go:282] 0 containers: []
	W1212 01:25:50.102079  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:50.102085  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:50.102146  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:50.127795  664006 cri.go:89] found id: ""
	I1212 01:25:50.127820  664006 logs.go:282] 0 containers: []
	W1212 01:25:50.127829  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:50.127835  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:50.127894  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:50.154229  664006 cri.go:89] found id: ""
	I1212 01:25:50.154253  664006 logs.go:282] 0 containers: []
	W1212 01:25:50.154262  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:50.154270  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:50.154330  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:50.183477  664006 cri.go:89] found id: ""
	I1212 01:25:50.183502  664006 logs.go:282] 0 containers: []
	W1212 01:25:50.183511  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:50.183518  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:50.183580  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:50.209973  664006 cri.go:89] found id: ""
	I1212 01:25:50.209995  664006 logs.go:282] 0 containers: []
	W1212 01:25:50.210003  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:50.210010  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:50.210074  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:50.239457  664006 cri.go:89] found id: ""
	I1212 01:25:50.239483  664006 logs.go:282] 0 containers: []
	W1212 01:25:50.239492  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:50.239499  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:50.239612  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:50.265275  664006 cri.go:89] found id: ""
	I1212 01:25:50.265301  664006 logs.go:282] 0 containers: []
	W1212 01:25:50.265310  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:50.265315  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:50.265375  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:50.291514  664006 cri.go:89] found id: ""
	I1212 01:25:50.291539  664006 logs.go:282] 0 containers: []
	W1212 01:25:50.291547  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:50.291562  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:50.291574  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:50.360765  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:50.360799  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:50.376873  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:50.376901  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:50.446002  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:50.446022  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:50.446062  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:50.479114  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:50.479149  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
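	(Editor's note: each retry also re-runs the same four gathering commands. For manual triage they can be bundled into one pass; every command here is taken verbatim from the log lines above, while the section headers and the output file are assumptions for illustration:

	    # Collect the same diagnostics minikube gathers on every retry.
	    {
	      echo "== kubelet ==";    sudo journalctl -u kubelet -n 400
	      echo "== crio ==";       sudo journalctl -u crio -n 400
	      echo "== dmesg ==";      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	      echo "== containers =="; sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	    } > /tmp/minikube-diag.txt 2>&1

	In a failing run like this one, the kubelet journal section is usually the informative part, since the container listing stays empty.)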
	I1212 01:25:53.011281  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:53.039203  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:53.039274  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:53.093934  664006 cri.go:89] found id: ""
	I1212 01:25:53.093957  664006 logs.go:282] 0 containers: []
	W1212 01:25:53.093965  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:53.093971  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:53.094030  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:53.121576  664006 cri.go:89] found id: ""
	I1212 01:25:53.121598  664006 logs.go:282] 0 containers: []
	W1212 01:25:53.121607  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:53.121613  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:53.121671  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:53.149854  664006 cri.go:89] found id: ""
	I1212 01:25:53.149884  664006 logs.go:282] 0 containers: []
	W1212 01:25:53.149894  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:53.149900  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:53.149960  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:53.179541  664006 cri.go:89] found id: ""
	I1212 01:25:53.179562  664006 logs.go:282] 0 containers: []
	W1212 01:25:53.179571  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:53.179577  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:53.179639  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:53.209729  664006 cri.go:89] found id: ""
	I1212 01:25:53.209752  664006 logs.go:282] 0 containers: []
	W1212 01:25:53.209760  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:53.209766  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:53.209832  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:53.235462  664006 cri.go:89] found id: ""
	I1212 01:25:53.235486  664006 logs.go:282] 0 containers: []
	W1212 01:25:53.235495  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:53.235502  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:53.235560  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:53.260964  664006 cri.go:89] found id: ""
	I1212 01:25:53.260988  664006 logs.go:282] 0 containers: []
	W1212 01:25:53.260997  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:53.261003  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:53.261061  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:53.286337  664006 cri.go:89] found id: ""
	I1212 01:25:53.286362  664006 logs.go:282] 0 containers: []
	W1212 01:25:53.286370  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:53.286378  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:53.286389  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:53.317273  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:53.317310  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:53.348829  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:53.348855  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:53.415840  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:53.415875  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:53.431586  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:53.431613  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:53.494816  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:55.995025  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:56.033590  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:56.033718  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:56.123583  664006 cri.go:89] found id: ""
	I1212 01:25:56.123605  664006 logs.go:282] 0 containers: []
	W1212 01:25:56.123623  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:56.123630  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:56.123699  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:56.165857  664006 cri.go:89] found id: ""
	I1212 01:25:56.165878  664006 logs.go:282] 0 containers: []
	W1212 01:25:56.165887  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:56.165893  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:56.165954  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:56.203466  664006 cri.go:89] found id: ""
	I1212 01:25:56.203488  664006 logs.go:282] 0 containers: []
	W1212 01:25:56.203497  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:56.203504  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:56.203566  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:56.268117  664006 cri.go:89] found id: ""
	I1212 01:25:56.268146  664006 logs.go:282] 0 containers: []
	W1212 01:25:56.268155  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:56.268162  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:56.268249  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:56.322455  664006 cri.go:89] found id: ""
	I1212 01:25:56.322489  664006 logs.go:282] 0 containers: []
	W1212 01:25:56.322506  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:56.322516  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:56.322613  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:56.375304  664006 cri.go:89] found id: ""
	I1212 01:25:56.375326  664006 logs.go:282] 0 containers: []
	W1212 01:25:56.375334  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:56.375342  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:56.375412  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:56.411644  664006 cri.go:89] found id: ""
	I1212 01:25:56.411665  664006 logs.go:282] 0 containers: []
	W1212 01:25:56.411673  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:56.411679  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:56.411736  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:56.448588  664006 cri.go:89] found id: ""
	I1212 01:25:56.448610  664006 logs.go:282] 0 containers: []
	W1212 01:25:56.448619  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:56.448628  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:56.448639  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:56.538003  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:56.538122  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:56.554368  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:56.554393  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:56.633555  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:56.633574  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:56.633587  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:25:56.664772  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:56.664804  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:59.195067  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:59.209214  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:25:59.209283  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:25:59.287352  664006 cri.go:89] found id: ""
	I1212 01:25:59.287373  664006 logs.go:282] 0 containers: []
	W1212 01:25:59.287381  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:25:59.287388  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:25:59.287447  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:25:59.347109  664006 cri.go:89] found id: ""
	I1212 01:25:59.347180  664006 logs.go:282] 0 containers: []
	W1212 01:25:59.347201  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:25:59.347218  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:25:59.347305  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:25:59.383690  664006 cri.go:89] found id: ""
	I1212 01:25:59.383762  664006 logs.go:282] 0 containers: []
	W1212 01:25:59.383783  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:25:59.383802  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:25:59.383891  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:25:59.417560  664006 cri.go:89] found id: ""
	I1212 01:25:59.417628  664006 logs.go:282] 0 containers: []
	W1212 01:25:59.417650  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:25:59.417668  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:25:59.417751  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:25:59.448099  664006 cri.go:89] found id: ""
	I1212 01:25:59.448169  664006 logs.go:282] 0 containers: []
	W1212 01:25:59.448192  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:25:59.448210  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:25:59.448312  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:25:59.481593  664006 cri.go:89] found id: ""
	I1212 01:25:59.481666  664006 logs.go:282] 0 containers: []
	W1212 01:25:59.481688  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:25:59.481944  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:25:59.482049  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:25:59.519598  664006 cri.go:89] found id: ""
	I1212 01:25:59.519669  664006 logs.go:282] 0 containers: []
	W1212 01:25:59.519700  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:25:59.519718  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:25:59.519826  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:25:59.559021  664006 cri.go:89] found id: ""
	I1212 01:25:59.559092  664006 logs.go:282] 0 containers: []
	W1212 01:25:59.559115  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:25:59.559136  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:25:59.559174  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:25:59.590352  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:25:59.590435  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:25:59.665582  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:25:59.665665  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:25:59.685690  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:25:59.685775  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:25:59.787240  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:25:59.787321  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:25:59.787349  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:26:02.333237  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:26:02.344529  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:26:02.344600  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:26:02.370297  664006 cri.go:89] found id: ""
	I1212 01:26:02.370320  664006 logs.go:282] 0 containers: []
	W1212 01:26:02.370329  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:26:02.370336  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:26:02.370394  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:26:02.395472  664006 cri.go:89] found id: ""
	I1212 01:26:02.395498  664006 logs.go:282] 0 containers: []
	W1212 01:26:02.395507  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:26:02.395513  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:26:02.395578  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:26:02.425602  664006 cri.go:89] found id: ""
	I1212 01:26:02.425627  664006 logs.go:282] 0 containers: []
	W1212 01:26:02.425636  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:26:02.425641  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:26:02.425700  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:26:02.453402  664006 cri.go:89] found id: ""
	I1212 01:26:02.453426  664006 logs.go:282] 0 containers: []
	W1212 01:26:02.453435  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:26:02.453441  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:26:02.453501  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:26:02.494647  664006 cri.go:89] found id: ""
	I1212 01:26:02.494669  664006 logs.go:282] 0 containers: []
	W1212 01:26:02.494676  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:26:02.494706  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:26:02.494769  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:26:02.530621  664006 cri.go:89] found id: ""
	I1212 01:26:02.530715  664006 logs.go:282] 0 containers: []
	W1212 01:26:02.530739  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:26:02.530770  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:26:02.530867  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:26:02.565624  664006 cri.go:89] found id: ""
	I1212 01:26:02.565645  664006 logs.go:282] 0 containers: []
	W1212 01:26:02.565654  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:26:02.565660  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:26:02.565718  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:26:02.593399  664006 cri.go:89] found id: ""
	I1212 01:26:02.593421  664006 logs.go:282] 0 containers: []
	W1212 01:26:02.593429  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:26:02.593438  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:26:02.593452  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:26:02.662225  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:26:02.662260  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:26:02.678393  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:26:02.678421  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:26:02.745906  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:26:02.745927  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:26:02.745940  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:26:02.778413  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:26:02.778449  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:26:05.310902  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:26:05.323380  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:26:05.323482  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:26:05.352448  664006 cri.go:89] found id: ""
	I1212 01:26:05.352471  664006 logs.go:282] 0 containers: []
	W1212 01:26:05.352479  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:26:05.352486  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:26:05.352548  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:26:05.378420  664006 cri.go:89] found id: ""
	I1212 01:26:05.378451  664006 logs.go:282] 0 containers: []
	W1212 01:26:05.378462  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:26:05.378469  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:26:05.378538  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:26:05.403605  664006 cri.go:89] found id: ""
	I1212 01:26:05.403638  664006 logs.go:282] 0 containers: []
	W1212 01:26:05.403647  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:26:05.403653  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:26:05.403725  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:26:05.429543  664006 cri.go:89] found id: ""
	I1212 01:26:05.429628  664006 logs.go:282] 0 containers: []
	W1212 01:26:05.429651  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:26:05.429670  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:26:05.429758  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:26:05.455490  664006 cri.go:89] found id: ""
	I1212 01:26:05.455515  664006 logs.go:282] 0 containers: []
	W1212 01:26:05.455523  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:26:05.455530  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:26:05.455612  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:26:05.499726  664006 cri.go:89] found id: ""
	I1212 01:26:05.499751  664006 logs.go:282] 0 containers: []
	W1212 01:26:05.499759  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:26:05.499766  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:26:05.499836  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:26:05.543162  664006 cri.go:89] found id: ""
	I1212 01:26:05.543184  664006 logs.go:282] 0 containers: []
	W1212 01:26:05.543193  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:26:05.543199  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:26:05.543261  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:26:05.568782  664006 cri.go:89] found id: ""
	I1212 01:26:05.568808  664006 logs.go:282] 0 containers: []
	W1212 01:26:05.568816  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:26:05.568825  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:26:05.568836  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:26:05.639775  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:26:05.639812  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:26:05.656489  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:26:05.656514  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:26:05.717766  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:26:05.717830  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:26:05.717849  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:26:05.749474  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:26:05.749507  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
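	(Editor's note: every crictl sweep returns empty while CRI-O's journal remains readable, so a reasonable next step is to confirm the runtime itself is healthy before suspecting kubelet. These commands are standard crictl/systemd tooling, not taken from this log, and are offered as a hedged suggestion:

	    sudo systemctl is-active crio   # runtime service state
	    sudo crictl info                # CRI runtime status and conditions
	    sudo crictl pods                # are any sandboxes present at all?

	If CRI-O reports healthy and no sandboxes exist, the failure likely sits in kubelet's static-pod startup rather than in the runtime.)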
	I1212 01:26:08.278537  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:26:08.288332  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:26:08.288405  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:26:08.314118  664006 cri.go:89] found id: ""
	I1212 01:26:08.314143  664006 logs.go:282] 0 containers: []
	W1212 01:26:08.314152  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:26:08.314173  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:26:08.314232  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:26:08.342261  664006 cri.go:89] found id: ""
	I1212 01:26:08.342284  664006 logs.go:282] 0 containers: []
	W1212 01:26:08.342292  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:26:08.342298  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:26:08.342356  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:26:08.371156  664006 cri.go:89] found id: ""
	I1212 01:26:08.371180  664006 logs.go:282] 0 containers: []
	W1212 01:26:08.371188  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:26:08.371194  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:26:08.371252  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:26:08.396709  664006 cri.go:89] found id: ""
	I1212 01:26:08.396732  664006 logs.go:282] 0 containers: []
	W1212 01:26:08.396740  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:26:08.396746  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:26:08.396813  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:26:08.426188  664006 cri.go:89] found id: ""
	I1212 01:26:08.426213  664006 logs.go:282] 0 containers: []
	W1212 01:26:08.426223  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:26:08.426229  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:26:08.426292  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:26:08.454343  664006 cri.go:89] found id: ""
	I1212 01:26:08.454368  664006 logs.go:282] 0 containers: []
	W1212 01:26:08.454376  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:26:08.454382  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:26:08.454443  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:26:08.498571  664006 cri.go:89] found id: ""
	I1212 01:26:08.498596  664006 logs.go:282] 0 containers: []
	W1212 01:26:08.498604  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:26:08.498610  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:26:08.498678  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:26:08.533270  664006 cri.go:89] found id: ""
	I1212 01:26:08.533297  664006 logs.go:282] 0 containers: []
	W1212 01:26:08.533306  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:26:08.533315  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:26:08.533328  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:26:08.613825  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:26:08.613911  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:26:08.630627  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:26:08.630657  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:26:08.712922  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:26:08.712995  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:26:08.713022  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:26:08.746731  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:26:08.746762  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:26:11.278761  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:26:11.288961  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:26:11.289053  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:26:11.313997  664006 cri.go:89] found id: ""
	I1212 01:26:11.314022  664006 logs.go:282] 0 containers: []
	W1212 01:26:11.314031  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:26:11.314038  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:26:11.314102  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:26:11.339333  664006 cri.go:89] found id: ""
	I1212 01:26:11.339358  664006 logs.go:282] 0 containers: []
	W1212 01:26:11.339366  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:26:11.339372  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:26:11.339431  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:26:11.365921  664006 cri.go:89] found id: ""
	I1212 01:26:11.365946  664006 logs.go:282] 0 containers: []
	W1212 01:26:11.365956  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:26:11.365962  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:26:11.366026  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:26:11.397840  664006 cri.go:89] found id: ""
	I1212 01:26:11.397864  664006 logs.go:282] 0 containers: []
	W1212 01:26:11.397872  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:26:11.397878  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:26:11.397935  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:26:11.423135  664006 cri.go:89] found id: ""
	I1212 01:26:11.423157  664006 logs.go:282] 0 containers: []
	W1212 01:26:11.423166  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:26:11.423171  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:26:11.423229  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:26:11.449467  664006 cri.go:89] found id: ""
	I1212 01:26:11.449491  664006 logs.go:282] 0 containers: []
	W1212 01:26:11.449500  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:26:11.449506  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:26:11.449566  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:26:11.484425  664006 cri.go:89] found id: ""
	I1212 01:26:11.484453  664006 logs.go:282] 0 containers: []
	W1212 01:26:11.484462  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:26:11.484468  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:26:11.484530  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:26:11.514513  664006 cri.go:89] found id: ""
	I1212 01:26:11.514537  664006 logs.go:282] 0 containers: []
	W1212 01:26:11.514546  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:26:11.514554  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:26:11.514569  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:26:11.551966  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:26:11.551994  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:26:11.622340  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:26:11.622373  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:26:11.638724  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:26:11.638751  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:26:11.701809  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:26:11.701829  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:26:11.701864  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:26:14.234098  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:26:14.244236  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:26:14.244316  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:26:14.272750  664006 cri.go:89] found id: ""
	I1212 01:26:14.272775  664006 logs.go:282] 0 containers: []
	W1212 01:26:14.272784  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:26:14.272790  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:26:14.272852  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:26:14.301902  664006 cri.go:89] found id: ""
	I1212 01:26:14.301926  664006 logs.go:282] 0 containers: []
	W1212 01:26:14.301935  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:26:14.301941  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:26:14.302000  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:26:14.330108  664006 cri.go:89] found id: ""
	I1212 01:26:14.330129  664006 logs.go:282] 0 containers: []
	W1212 01:26:14.330138  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:26:14.330144  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:26:14.330204  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:26:14.355512  664006 cri.go:89] found id: ""
	I1212 01:26:14.355535  664006 logs.go:282] 0 containers: []
	W1212 01:26:14.355543  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:26:14.355550  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:26:14.355617  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:26:14.382406  664006 cri.go:89] found id: ""
	I1212 01:26:14.382427  664006 logs.go:282] 0 containers: []
	W1212 01:26:14.382435  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:26:14.382441  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:26:14.382500  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:26:14.407873  664006 cri.go:89] found id: ""
	I1212 01:26:14.407895  664006 logs.go:282] 0 containers: []
	W1212 01:26:14.407904  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:26:14.407910  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:26:14.407971  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:26:14.432717  664006 cri.go:89] found id: ""
	I1212 01:26:14.432739  664006 logs.go:282] 0 containers: []
	W1212 01:26:14.432747  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:26:14.432753  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:26:14.432818  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:26:14.458242  664006 cri.go:89] found id: ""
	I1212 01:26:14.458312  664006 logs.go:282] 0 containers: []
	W1212 01:26:14.458336  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:26:14.458356  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:26:14.458380  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:26:14.492654  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:26:14.492754  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:26:14.550353  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:26:14.550378  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:26:14.621419  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:26:14.621457  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:26:14.648110  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:26:14.648138  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:26:14.741041  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:26:17.242529  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:26:17.252604  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:26:17.252672  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:26:17.278411  664006 cri.go:89] found id: ""
	I1212 01:26:17.278433  664006 logs.go:282] 0 containers: []
	W1212 01:26:17.278442  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:26:17.278448  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:26:17.278508  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:26:17.303778  664006 cri.go:89] found id: ""
	I1212 01:26:17.303804  664006 logs.go:282] 0 containers: []
	W1212 01:26:17.303813  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:26:17.303819  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:26:17.303887  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:26:17.328957  664006 cri.go:89] found id: ""
	I1212 01:26:17.328982  664006 logs.go:282] 0 containers: []
	W1212 01:26:17.328991  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:26:17.328997  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:26:17.329056  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:26:17.355616  664006 cri.go:89] found id: ""
	I1212 01:26:17.355638  664006 logs.go:282] 0 containers: []
	W1212 01:26:17.355647  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:26:17.355653  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:26:17.355710  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:26:17.380820  664006 cri.go:89] found id: ""
	I1212 01:26:17.380845  664006 logs.go:282] 0 containers: []
	W1212 01:26:17.380854  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:26:17.380861  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:26:17.380921  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:26:17.408632  664006 cri.go:89] found id: ""
	I1212 01:26:17.408705  664006 logs.go:282] 0 containers: []
	W1212 01:26:17.408720  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:26:17.408727  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:26:17.408788  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:26:17.434984  664006 cri.go:89] found id: ""
	I1212 01:26:17.435008  664006 logs.go:282] 0 containers: []
	W1212 01:26:17.435017  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:26:17.435023  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:26:17.435084  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:26:17.462182  664006 cri.go:89] found id: ""
	I1212 01:26:17.462207  664006 logs.go:282] 0 containers: []
	W1212 01:26:17.462216  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:26:17.462224  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:26:17.462237  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:26:17.491491  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:26:17.491519  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:26:17.572109  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:26:17.572144  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:26:17.588180  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:26:17.588216  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:26:17.652718  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:26:17.652738  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:26:17.652750  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
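	
	The cycle above is a wait-for-apiserver loop: poll for a kube-apiserver process with pgrep, list each expected control-plane container over CRI with crictl (every listing comes back empty), then gather kubelet, dmesg, CRI-O, and container-status logs and probe the API with "kubectl describe nodes" against the node-local kubeconfig. A minimal sketch of the equivalent manual check, built only from commands that appear verbatim in this log (binary and kubeconfig paths are taken from the log lines above):
	
	    # Sketch: rerun the same checks by hand inside the node.
	    for i in $(seq 1 10); do
	        # Is an apiserver process running at all?
	        sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	        # List control-plane containers in any state; empty output matches
	        # the repeated `found id: ""` lines in the log.
	        for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                    kube-controller-manager kindnet storage-provisioner; do
	            sudo crictl ps -a --quiet --name="$name"
	        done
	        # The probe that keeps failing with "connection refused" on localhost:8443.
	        sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	            --kubeconfig=/var/lib/minikube/kubeconfig || true
	        sleep 3
	    done
	
	Since every crictl listing is empty, the refusal on localhost:8443 means no apiserver container was ever created; the kubelet and CRI-O journals gathered above (journalctl -u kubelet, journalctl -u crio) are where the creation failure would show up.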
	[... the wait-for-apiserver cycle above (pgrep for kube-apiserver, empty crictl listings for every control-plane container, log gathering, and the failing "kubectl describe nodes") repeats unchanged roughly every 3 seconds from 01:26:20 through 01:26:44, each pass ending with the same "The connection to the server localhost:8443 was refused" error ...]
	I1212 01:26:47.097059  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:26:47.107535  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:26:47.107605  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:26:47.134392  664006 cri.go:89] found id: ""
	I1212 01:26:47.134415  664006 logs.go:282] 0 containers: []
	W1212 01:26:47.134423  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:26:47.134429  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:26:47.134488  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:26:47.162264  664006 cri.go:89] found id: ""
	I1212 01:26:47.162289  664006 logs.go:282] 0 containers: []
	W1212 01:26:47.162298  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:26:47.162305  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:26:47.162367  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:26:47.194596  664006 cri.go:89] found id: ""
	I1212 01:26:47.194619  664006 logs.go:282] 0 containers: []
	W1212 01:26:47.194628  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:26:47.194634  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:26:47.194713  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:26:47.220991  664006 cri.go:89] found id: ""
	I1212 01:26:47.221017  664006 logs.go:282] 0 containers: []
	W1212 01:26:47.221026  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:26:47.221032  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:26:47.221092  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:26:47.247628  664006 cri.go:89] found id: ""
	I1212 01:26:47.247653  664006 logs.go:282] 0 containers: []
	W1212 01:26:47.247662  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:26:47.247668  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:26:47.247728  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:26:47.273149  664006 cri.go:89] found id: ""
	I1212 01:26:47.273174  664006 logs.go:282] 0 containers: []
	W1212 01:26:47.273183  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:26:47.273189  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:26:47.273257  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:26:47.303269  664006 cri.go:89] found id: ""
	I1212 01:26:47.303293  664006 logs.go:282] 0 containers: []
	W1212 01:26:47.303310  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:26:47.303316  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:26:47.303391  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:26:47.330832  664006 cri.go:89] found id: ""
	I1212 01:26:47.330855  664006 logs.go:282] 0 containers: []
	W1212 01:26:47.330864  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:26:47.330873  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:26:47.330885  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:26:47.400779  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:26:47.400818  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:26:47.416796  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:26:47.416878  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:26:47.491275  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:26:47.491338  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:26:47.491374  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:26:47.526314  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:26:47.526346  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
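
Note: each retry cycle above is the same probe: `pgrep` for a kube-apiserver process, then `crictl ps -a --quiet --name=<component>` for every control-plane component, and, when all of those come back empty, a fallback to gathering kubelet, dmesg, CRI-O, and container-status logs. A minimal reconstruction of that loop; the command strings are taken verbatim from the log, while `runSSH` is an assumed local stand-in for minikube's ssh_runner (the real tool runs these commands on the node over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // runSSH is a local stand-in for minikube's ssh_runner (assumption:
    // the real tool executes these commands on the node over SSH).
    func runSSH(cmd string) string {
    	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	return string(out)
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	found := false
    	for _, name := range components {
    		// Same probe the log repeats: list all containers matching the name.
    		if strings.TrimSpace(runSSH("sudo crictl ps -a --quiet --name="+name)) != "" {
    			found = true
    		}
    	}
    	if !found {
    		// Nothing running: fall back to the diagnostics the log gathers.
    		fmt.Println(runSSH("sudo journalctl -u kubelet -n 400"))
    		fmt.Println(runSSH("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"))
    		fmt.Println(runSSH("sudo journalctl -u crio -n 400"))
    	}
    }
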
	I1212 01:26:50.065362  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:26:50.075948  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:26:50.076020  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:26:50.103487  664006 cri.go:89] found id: ""
	I1212 01:26:50.103516  664006 logs.go:282] 0 containers: []
	W1212 01:26:50.103526  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:26:50.103533  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:26:50.103599  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:26:50.130923  664006 cri.go:89] found id: ""
	I1212 01:26:50.130948  664006 logs.go:282] 0 containers: []
	W1212 01:26:50.130957  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:26:50.130963  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:26:50.131024  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:26:50.161919  664006 cri.go:89] found id: ""
	I1212 01:26:50.161942  664006 logs.go:282] 0 containers: []
	W1212 01:26:50.161951  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:26:50.161957  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:26:50.162020  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:26:50.188130  664006 cri.go:89] found id: ""
	I1212 01:26:50.188155  664006 logs.go:282] 0 containers: []
	W1212 01:26:50.188163  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:26:50.188170  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:26:50.188231  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:26:50.216932  664006 cri.go:89] found id: ""
	I1212 01:26:50.216956  664006 logs.go:282] 0 containers: []
	W1212 01:26:50.216965  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:26:50.216971  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:26:50.217030  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:26:50.242868  664006 cri.go:89] found id: ""
	I1212 01:26:50.242893  664006 logs.go:282] 0 containers: []
	W1212 01:26:50.242902  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:26:50.242910  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:26:50.242970  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:26:50.268365  664006 cri.go:89] found id: ""
	I1212 01:26:50.268392  664006 logs.go:282] 0 containers: []
	W1212 01:26:50.268401  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:26:50.268408  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:26:50.268474  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:26:50.294103  664006 cri.go:89] found id: ""
	I1212 01:26:50.294125  664006 logs.go:282] 0 containers: []
	W1212 01:26:50.294133  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:26:50.294142  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:26:50.294154  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:26:50.361347  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:26:50.361384  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:26:50.378080  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:26:50.378109  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:26:50.442832  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:26:50.442851  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:26:50.442863  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:26:50.478300  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:26:50.478343  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:26:53.013838  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:26:53.024263  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:26:53.024404  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:26:53.050657  664006 cri.go:89] found id: ""
	I1212 01:26:53.050702  664006 logs.go:282] 0 containers: []
	W1212 01:26:53.050711  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:26:53.050718  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:26:53.050783  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:26:53.081202  664006 cri.go:89] found id: ""
	I1212 01:26:53.081225  664006 logs.go:282] 0 containers: []
	W1212 01:26:53.081234  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:26:53.081241  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:26:53.081308  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:26:53.111286  664006 cri.go:89] found id: ""
	I1212 01:26:53.111309  664006 logs.go:282] 0 containers: []
	W1212 01:26:53.111318  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:26:53.111324  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:26:53.111386  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:26:53.136469  664006 cri.go:89] found id: ""
	I1212 01:26:53.136491  664006 logs.go:282] 0 containers: []
	W1212 01:26:53.136500  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:26:53.136506  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:26:53.136607  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:26:53.161574  664006 cri.go:89] found id: ""
	I1212 01:26:53.161599  664006 logs.go:282] 0 containers: []
	W1212 01:26:53.161608  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:26:53.161614  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:26:53.161676  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:26:53.187656  664006 cri.go:89] found id: ""
	I1212 01:26:53.187679  664006 logs.go:282] 0 containers: []
	W1212 01:26:53.187688  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:26:53.187694  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:26:53.187754  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:26:53.214437  664006 cri.go:89] found id: ""
	I1212 01:26:53.214460  664006 logs.go:282] 0 containers: []
	W1212 01:26:53.214469  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:26:53.214475  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:26:53.214535  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:26:53.240609  664006 cri.go:89] found id: ""
	I1212 01:26:53.240632  664006 logs.go:282] 0 containers: []
	W1212 01:26:53.240641  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:26:53.240650  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:26:53.240661  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:26:53.271913  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:26:53.271944  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:26:53.303811  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:26:53.303837  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:26:53.371616  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:26:53.371654  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:26:53.387679  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:26:53.387708  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:26:53.453534  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:26:55.955214  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:26:55.965135  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:26:55.965203  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:26:55.990637  664006 cri.go:89] found id: ""
	I1212 01:26:55.990659  664006 logs.go:282] 0 containers: []
	W1212 01:26:55.990666  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:26:55.990672  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:26:55.990761  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:26:56.017420  664006 cri.go:89] found id: ""
	I1212 01:26:56.017447  664006 logs.go:282] 0 containers: []
	W1212 01:26:56.017456  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:26:56.017463  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:26:56.017524  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:26:56.042812  664006 cri.go:89] found id: ""
	I1212 01:26:56.042835  664006 logs.go:282] 0 containers: []
	W1212 01:26:56.042845  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:26:56.042852  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:26:56.042915  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:26:56.074261  664006 cri.go:89] found id: ""
	I1212 01:26:56.074286  664006 logs.go:282] 0 containers: []
	W1212 01:26:56.074295  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:26:56.074302  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:26:56.074361  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:26:56.101182  664006 cri.go:89] found id: ""
	I1212 01:26:56.101208  664006 logs.go:282] 0 containers: []
	W1212 01:26:56.101218  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:26:56.101224  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:26:56.101304  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:26:56.126371  664006 cri.go:89] found id: ""
	I1212 01:26:56.126396  664006 logs.go:282] 0 containers: []
	W1212 01:26:56.126404  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:26:56.126411  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:26:56.126468  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:26:56.154306  664006 cri.go:89] found id: ""
	I1212 01:26:56.154331  664006 logs.go:282] 0 containers: []
	W1212 01:26:56.154340  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:26:56.154346  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:26:56.154404  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:26:56.181389  664006 cri.go:89] found id: ""
	I1212 01:26:56.181421  664006 logs.go:282] 0 containers: []
	W1212 01:26:56.181430  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:26:56.181439  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:26:56.181451  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:26:56.212740  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:26:56.212775  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:26:56.243285  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:26:56.243313  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:26:56.311456  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:26:56.311494  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:26:56.327433  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:26:56.327502  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:26:56.396739  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:26:58.897025  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:26:58.906860  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:26:58.906927  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:26:58.937319  664006 cri.go:89] found id: ""
	I1212 01:26:58.937342  664006 logs.go:282] 0 containers: []
	W1212 01:26:58.937350  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:26:58.937357  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:26:58.937416  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:26:58.962716  664006 cri.go:89] found id: ""
	I1212 01:26:58.962743  664006 logs.go:282] 0 containers: []
	W1212 01:26:58.962752  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:26:58.962764  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:26:58.962825  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:26:58.988329  664006 cri.go:89] found id: ""
	I1212 01:26:58.988353  664006 logs.go:282] 0 containers: []
	W1212 01:26:58.988362  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:26:58.988368  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:26:58.988430  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:26:59.015652  664006 cri.go:89] found id: ""
	I1212 01:26:59.015676  664006 logs.go:282] 0 containers: []
	W1212 01:26:59.015685  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:26:59.015692  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:26:59.015753  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:26:59.045872  664006 cri.go:89] found id: ""
	I1212 01:26:59.045896  664006 logs.go:282] 0 containers: []
	W1212 01:26:59.045904  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:26:59.045910  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:26:59.045970  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:26:59.070852  664006 cri.go:89] found id: ""
	I1212 01:26:59.070878  664006 logs.go:282] 0 containers: []
	W1212 01:26:59.070887  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:26:59.070893  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:26:59.070953  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:26:59.095950  664006 cri.go:89] found id: ""
	I1212 01:26:59.095978  664006 logs.go:282] 0 containers: []
	W1212 01:26:59.095986  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:26:59.095992  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:26:59.096048  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:26:59.120911  664006 cri.go:89] found id: ""
	I1212 01:26:59.120935  664006 logs.go:282] 0 containers: []
	W1212 01:26:59.120943  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:26:59.120952  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:26:59.120964  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:26:59.136556  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:26:59.136583  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:26:59.208271  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:26:59.208293  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:26:59.208305  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:26:59.239427  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:26:59.239461  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:26:59.266944  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:26:59.266968  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:27:01.834997  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:27:01.845006  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:27:01.845072  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:27:01.877455  664006 cri.go:89] found id: ""
	I1212 01:27:01.877476  664006 logs.go:282] 0 containers: []
	W1212 01:27:01.877484  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:27:01.877490  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:27:01.877549  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:27:01.907691  664006 cri.go:89] found id: ""
	I1212 01:27:01.907711  664006 logs.go:282] 0 containers: []
	W1212 01:27:01.907719  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:27:01.907725  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:27:01.907781  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:27:01.934127  664006 cri.go:89] found id: ""
	I1212 01:27:01.934148  664006 logs.go:282] 0 containers: []
	W1212 01:27:01.934157  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:27:01.934163  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:27:01.934229  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:27:01.959480  664006 cri.go:89] found id: ""
	I1212 01:27:01.959501  664006 logs.go:282] 0 containers: []
	W1212 01:27:01.959510  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:27:01.959516  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:27:01.959583  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:27:01.985765  664006 cri.go:89] found id: ""
	I1212 01:27:01.985787  664006 logs.go:282] 0 containers: []
	W1212 01:27:01.985795  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:27:01.985801  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:27:01.985861  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:27:02.016184  664006 cri.go:89] found id: ""
	I1212 01:27:02.016208  664006 logs.go:282] 0 containers: []
	W1212 01:27:02.016216  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:27:02.016223  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:27:02.016307  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:27:02.046566  664006 cri.go:89] found id: ""
	I1212 01:27:02.046590  664006 logs.go:282] 0 containers: []
	W1212 01:27:02.046599  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:27:02.046606  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:27:02.046662  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:27:02.074028  664006 cri.go:89] found id: ""
	I1212 01:27:02.074057  664006 logs.go:282] 0 containers: []
	W1212 01:27:02.074066  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:27:02.074075  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:27:02.074086  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:27:02.141375  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:27:02.141408  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:27:02.158563  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:27:02.158594  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:27:02.221684  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:27:02.221703  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:27:02.221716  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:27:02.252812  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:27:02.252845  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:27:04.780681  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:27:04.801934  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:27:04.802009  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:27:04.842019  664006 cri.go:89] found id: ""
	I1212 01:27:04.842039  664006 logs.go:282] 0 containers: []
	W1212 01:27:04.842047  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:27:04.842053  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:27:04.842112  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:27:04.884319  664006 cri.go:89] found id: ""
	I1212 01:27:04.884341  664006 logs.go:282] 0 containers: []
	W1212 01:27:04.884349  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:27:04.884355  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:27:04.884417  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:27:04.922666  664006 cri.go:89] found id: ""
	I1212 01:27:04.922718  664006 logs.go:282] 0 containers: []
	W1212 01:27:04.922726  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:27:04.922733  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:27:04.922794  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:27:04.974874  664006 cri.go:89] found id: ""
	I1212 01:27:04.974897  664006 logs.go:282] 0 containers: []
	W1212 01:27:04.974906  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:27:04.974912  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:27:04.974995  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:27:05.014723  664006 cri.go:89] found id: ""
	I1212 01:27:05.014747  664006 logs.go:282] 0 containers: []
	W1212 01:27:05.014756  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:27:05.014762  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:27:05.014834  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:27:05.053072  664006 cri.go:89] found id: ""
	I1212 01:27:05.053094  664006 logs.go:282] 0 containers: []
	W1212 01:27:05.053103  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:27:05.053109  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:27:05.053168  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:27:05.082750  664006 cri.go:89] found id: ""
	I1212 01:27:05.082778  664006 logs.go:282] 0 containers: []
	W1212 01:27:05.082788  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:27:05.082801  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:27:05.082875  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:27:05.125098  664006 cri.go:89] found id: ""
	I1212 01:27:05.125120  664006 logs.go:282] 0 containers: []
	W1212 01:27:05.125128  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:27:05.125137  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:27:05.125148  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:27:05.208537  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:27:05.208574  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:27:05.227775  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:27:05.227807  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:27:05.336189  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:27:05.336215  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:27:05.336256  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:27:05.388478  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:27:05.388521  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:27:07.920359  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:27:07.932616  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:27:07.932683  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:27:07.979947  664006 cri.go:89] found id: ""
	I1212 01:27:07.979969  664006 logs.go:282] 0 containers: []
	W1212 01:27:07.979977  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:27:07.979983  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:27:07.980041  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:27:08.014879  664006 cri.go:89] found id: ""
	I1212 01:27:08.014902  664006 logs.go:282] 0 containers: []
	W1212 01:27:08.014910  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:27:08.014916  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:27:08.014979  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:27:08.063530  664006 cri.go:89] found id: ""
	I1212 01:27:08.063552  664006 logs.go:282] 0 containers: []
	W1212 01:27:08.063560  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:27:08.063566  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:27:08.063629  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:27:08.098095  664006 cri.go:89] found id: ""
	I1212 01:27:08.098116  664006 logs.go:282] 0 containers: []
	W1212 01:27:08.098124  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:27:08.098130  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:27:08.098189  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:27:08.134500  664006 cri.go:89] found id: ""
	I1212 01:27:08.134521  664006 logs.go:282] 0 containers: []
	W1212 01:27:08.134529  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:27:08.134535  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:27:08.134594  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:27:08.169475  664006 cri.go:89] found id: ""
	I1212 01:27:08.169496  664006 logs.go:282] 0 containers: []
	W1212 01:27:08.169504  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:27:08.169511  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:27:08.169570  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:27:08.209434  664006 cri.go:89] found id: ""
	I1212 01:27:08.209453  664006 logs.go:282] 0 containers: []
	W1212 01:27:08.209461  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:27:08.209467  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:27:08.209520  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:27:08.241006  664006 cri.go:89] found id: ""
	I1212 01:27:08.241027  664006 logs.go:282] 0 containers: []
	W1212 01:27:08.241042  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:27:08.241050  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:27:08.241062  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:27:08.276770  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:27:08.276835  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:27:08.323914  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:27:08.323936  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:27:08.407193  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:27:08.407263  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:27:08.429904  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:27:08.429930  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:27:08.547915  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:27:11.048974  664006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:27:11.059322  664006 kubeadm.go:602] duration metric: took 4m2.039305384s to restartPrimaryControlPlane
	W1212 01:27:11.059387  664006 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:27:11.059452  664006 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:27:11.574311  664006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:27:11.591384  664006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:27:11.602108  664006 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:27:11.602172  664006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:27:11.616729  664006 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:27:11.616795  664006 kubeadm.go:158] found existing configuration files:
	
	I1212 01:27:11.616872  664006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:27:11.625654  664006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:27:11.625717  664006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:27:11.633384  664006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:27:11.641199  664006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:27:11.641268  664006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:27:11.649804  664006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:27:11.658272  664006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:27:11.658420  664006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:27:11.666358  664006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:27:11.674843  664006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:27:11.674943  664006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
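
Note: the sequence above is minikube's stale-config cleanup before re-running `kubeadm init`: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails. Here every grep exits with status 2 because `kubeadm reset` already deleted the files, so the `rm -f` calls are no-ops. A sketch of the same sequence as a standalone approximation, not the actual minikube code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := "/etc/kubernetes/" + f
    		// grep exits non-zero when the endpoint (or the file itself) is missing...
    		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
    			// ...in which case the config is stale (or absent) and is removed.
    			fmt.Println("removing", path)
    			exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }
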
	I1212 01:27:11.683737  664006 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:27:11.746617  664006 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:27:11.746998  664006 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:27:11.858877  664006 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:27:11.858999  664006 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:27:11.859093  664006 kubeadm.go:319] OS: Linux
	I1212 01:27:11.859168  664006 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:27:11.859218  664006 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:27:11.859312  664006 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:27:11.859395  664006 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:27:11.859448  664006 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:27:11.859529  664006 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:27:11.859594  664006 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:27:11.859657  664006 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:27:11.859721  664006 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:27:11.960342  664006 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:27:11.960451  664006 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:27:11.960549  664006 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:27:11.986606  664006 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:27:11.992303  664006 out.go:252]   - Generating certificates and keys ...
	I1212 01:27:11.992392  664006 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:27:11.992457  664006 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:27:11.992534  664006 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:27:11.992594  664006 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:27:11.992664  664006 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:27:11.993148  664006 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:27:11.994116  664006 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:27:11.995372  664006 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:27:11.995854  664006 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:27:11.996992  664006 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:27:11.998662  664006 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:27:11.998975  664006 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:27:12.294387  664006 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:27:12.971864  664006 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:27:13.088212  664006 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:27:13.195887  664006 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:27:13.480669  664006 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:27:13.482417  664006 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:27:13.485139  664006 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:27:13.488318  664006 out.go:252]   - Booting up control plane ...
	I1212 01:27:13.488421  664006 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:27:13.488503  664006 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:27:13.490421  664006 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:27:13.505905  664006 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:27:13.506261  664006 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:27:13.516985  664006 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:27:13.517280  664006 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:27:13.517514  664006 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:27:13.722549  664006 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:27:13.722675  664006 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:31:13.722850  664006 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000647701s
	I1212 01:31:13.722884  664006 kubeadm.go:319] 
	I1212 01:31:13.722942  664006 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:31:13.722979  664006 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:31:13.723093  664006 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:31:13.723102  664006 kubeadm.go:319] 
	I1212 01:31:13.723206  664006 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:31:13.723242  664006 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:31:13.723277  664006 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:31:13.723285  664006 kubeadm.go:319] 
	I1212 01:31:13.727064  664006 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:31:13.727491  664006 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:31:13.727604  664006 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:31:13.727842  664006 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:31:13.727852  664006 kubeadm.go:319] 
	I1212 01:31:13.727921  664006 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
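
Note: this is where the attempt actually dies. kubeadm's wait-control-plane phase polls the kubelet's healthz endpoint at http://127.0.0.1:10248/healthz for up to 4m0s, and the kubelet never answers, consistent with the empty `crictl` listings earlier. A minimal probe of that endpoint, mirroring the `curl -sSL` call the error message cites (the standalone program is an assumption, not kubeadm's implementation):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 5 * time.Second}
    	resp, err := client.Get("http://127.0.0.1:10248/healthz")
    	if err != nil {
    		// On this run the kubelet never came up, so this branch fires.
    		fmt.Println("kubelet not healthy:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("kubelet healthz status:", resp.Status)
    }
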
	W1212 01:31:13.728035  664006 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000647701s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
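
Note: one lead worth pulling out of the warnings above: this 5.15 AWS kernel is running cgroups v1, which kubelet v1.35+ rejects unless `FailCgroupV1` is explicitly set to false in the kubelet configuration (see the linked KEP). A small host-side check for which cgroup mode is in effect, assuming the golang.org/x/sys/unix package; this check is illustrative, not something minikube runs:

    package main

    import (
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	var st unix.Statfs_t
    	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
    		fmt.Println("statfs failed:", err)
    		return
    	}
    	// The unified (v2) hierarchy mounts cgroup2fs at /sys/fs/cgroup.
    	if st.Type == unix.CGROUP2_SUPER_MAGIC {
    		fmt.Println("cgroup v2 (unified)")
    	} else {
    		fmt.Println("cgroup v1 (legacy); kubelet v1.35+ needs FailCgroupV1=false")
    	}
    }
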
	
	I1212 01:31:13.728152  664006 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:31:14.148616  664006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:31:14.161344  664006 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:31:14.161406  664006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:31:14.169124  664006 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:31:14.169145  664006 kubeadm.go:158] found existing configuration files:
	
	I1212 01:31:14.169197  664006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:31:14.176644  664006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:31:14.176707  664006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:31:14.184169  664006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:31:14.192001  664006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:31:14.192066  664006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:31:14.199503  664006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:31:14.207167  664006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:31:14.207235  664006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:31:14.214427  664006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:31:14.222656  664006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:31:14.222737  664006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:31:14.230501  664006 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:31:14.270952  664006 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:31:14.271050  664006 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:31:14.345535  664006 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:31:14.345673  664006 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:31:14.345729  664006 kubeadm.go:319] OS: Linux
	I1212 01:31:14.345796  664006 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:31:14.345861  664006 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:31:14.345929  664006 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:31:14.345994  664006 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:31:14.346090  664006 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:31:14.346169  664006 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:31:14.346246  664006 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:31:14.346319  664006 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:31:14.346392  664006 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:31:14.414588  664006 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:31:14.414798  664006 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:31:14.414950  664006 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:31:14.421644  664006 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:31:14.427151  664006 out.go:252]   - Generating certificates and keys ...
	I1212 01:31:14.427308  664006 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:31:14.427409  664006 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:31:14.427530  664006 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:31:14.427625  664006 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:31:14.427736  664006 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:31:14.427823  664006 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:31:14.427931  664006 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:31:14.428029  664006 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:31:14.428144  664006 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:31:14.428255  664006 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:31:14.428326  664006 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:31:14.428418  664006 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:31:14.660554  664006 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:31:15.201613  664006 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:31:15.727410  664006 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:31:15.977388  664006 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:31:16.486462  664006 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:31:16.487136  664006 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:31:16.492515  664006 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:31:16.495698  664006 out.go:252]   - Booting up control plane ...
	I1212 01:31:16.495810  664006 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:31:16.495897  664006 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:31:16.496566  664006 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:31:16.511898  664006 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:31:16.512008  664006 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:31:16.519413  664006 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:31:16.519746  664006 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:31:16.519968  664006 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:31:16.654059  664006 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:31:16.654178  664006 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:35:16.655382  664006 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001443347s
	I1212 01:35:16.655414  664006 kubeadm.go:319] 
	I1212 01:35:16.655468  664006 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:35:16.655499  664006 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:35:16.655598  664006 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:35:16.655603  664006 kubeadm.go:319] 
	I1212 01:35:16.655701  664006 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:35:16.655731  664006 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:35:16.655760  664006 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:35:16.655764  664006 kubeadm.go:319] 
	I1212 01:35:16.660115  664006 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:35:16.660546  664006 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:35:16.660662  664006 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:35:16.660928  664006 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 01:35:16.660940  664006 kubeadm.go:319] 
	I1212 01:35:16.661010  664006 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:35:16.661072  664006 kubeadm.go:403] duration metric: took 12m7.68470731s to StartCluster
	I1212 01:35:16.661109  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:35:16.661169  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:35:16.692829  664006 cri.go:89] found id: ""
	I1212 01:35:16.692861  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.692870  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:35:16.692876  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:35:16.692941  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:35:16.732997  664006 cri.go:89] found id: ""
	I1212 01:35:16.733021  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.733030  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:35:16.733035  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:35:16.733095  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:35:16.785196  664006 cri.go:89] found id: ""
	I1212 01:35:16.785218  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.785227  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:35:16.785233  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:35:16.785290  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:35:16.825764  664006 cri.go:89] found id: ""
	I1212 01:35:16.825785  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.825793  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:35:16.825800  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:35:16.825856  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:35:16.856866  664006 cri.go:89] found id: ""
	I1212 01:35:16.856888  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.856896  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:35:16.856902  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:35:16.856961  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:35:16.886621  664006 cri.go:89] found id: ""
	I1212 01:35:16.886644  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.886652  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:35:16.886659  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:35:16.886732  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:35:16.913954  664006 cri.go:89] found id: ""
	I1212 01:35:16.913980  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.913989  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:35:16.913996  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:35:16.914052  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:35:16.951984  664006 cri.go:89] found id: ""
	I1212 01:35:16.952015  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.952023  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:35:16.952032  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:35:16.952044  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:35:17.030761  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:35:17.030802  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:35:17.049006  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:35:17.049033  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:35:17.145593  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:35:17.145617  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:35:17.145629  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:35:17.188091  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:35:17.188169  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:35:17.226761  664006 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001443347s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:35:17.226810  664006 out.go:285] * 
	W1212 01:35:17.226863  664006 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001443347s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:35:17.226880  664006 out.go:285] * 
	W1212 01:35:17.229004  664006 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:35:17.236108  664006 out.go:203] 
	W1212 01:35:17.238117  664006 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001443347s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:35:17.238179  664006 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:35:17.238203  664006 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:35:17.241596  664006 out.go:203] 

                                                
                                                
** /stderr **
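The stderr captured above warns, on every attempt, that kubelet v1.35+ fails on cgroup v1 hosts unless the kubelet configuration option 'FailCgroupV1' is explicitly set to 'false' (the SystemVerification check is already skipped via --ignore-preflight-errors). A minimal sketch of that opt-in through kubeadm's patch mechanism, i.e. the same "kubeletconfiguration" patch target the log shows being applied; the patch directory, file name, and the failCgroupV1 field spelling are assumptions here, not taken from this run:

	# Hedged sketch: enable cgroup v1 support for kubelet >= v1.35, per the
	# WARNING text above. Directory and file name are illustrative.
	sudo mkdir -p /etc/kubernetes/patches
	sudo tee /etc/kubernetes/patches/kubeletconfiguration+strategic.yaml <<'EOF'
	# Partial KubeletConfiguration, strategically merged into the generated config
	failCgroupV1: false
	EOF
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches /etc/kubernetes/patches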
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
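The run exits 109 after "Exiting due to K8S_KUBELET_NOT_RUNNING", and its final suggestion is the systemd cgroup driver. A sketch of the retry the report itself recommends, reusing the failing invocation's binary, profile, and flags with only the suggested kubelet override added:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-224473 \
	  --memory=3072 \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --driver=docker \
	  --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd \
	  --alsologtostderr -v=1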
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-224473 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-224473 version --output=json: exit status 1 (165.824564ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.85.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
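kubectl can still print clientVersion because only the server half of the query needs the cluster; the apiserver never came up, since kubeadm never got a healthy kubelet to launch the static pods. A quick reachability probe against the endpoint named in the stderr above would confirm this; the curl flags are standard, and the address comes from this report:

	# Hedged sketch: -k skips TLS verification since only reachability
	# matters; expect "connection refused", mirroring the kubectl error.
	curl -k --max-time 5 https://192.168.85.2:8443/healthz \
	  || echo "apiserver unreachable at 192.168.85.2:8443"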
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-12 01:35:18.312594539 +0000 UTC m=+5073.821932543
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-224473
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-224473:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cb725c648f7d995c1cb5c6e9e3dcfabd9abd50337969a1d6bbdf25346015b783",
	        "Created": "2025-12-12T01:22:27.821826623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 664130,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:22:57.671929107Z",
	            "FinishedAt": "2025-12-12T01:22:56.62892862Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/cb725c648f7d995c1cb5c6e9e3dcfabd9abd50337969a1d6bbdf25346015b783/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cb725c648f7d995c1cb5c6e9e3dcfabd9abd50337969a1d6bbdf25346015b783/hostname",
	        "HostsPath": "/var/lib/docker/containers/cb725c648f7d995c1cb5c6e9e3dcfabd9abd50337969a1d6bbdf25346015b783/hosts",
	        "LogPath": "/var/lib/docker/containers/cb725c648f7d995c1cb5c6e9e3dcfabd9abd50337969a1d6bbdf25346015b783/cb725c648f7d995c1cb5c6e9e3dcfabd9abd50337969a1d6bbdf25346015b783-json.log",
	        "Name": "/kubernetes-upgrade-224473",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-224473:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-224473",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cb725c648f7d995c1cb5c6e9e3dcfabd9abd50337969a1d6bbdf25346015b783",
	                "LowerDir": "/var/lib/docker/overlay2/cace0c96dac387a7d1e9e269681a91abd42ae9f4ef3d0abeea856d8ce73c75b9-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cace0c96dac387a7d1e9e269681a91abd42ae9f4ef3d0abeea856d8ce73c75b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cace0c96dac387a7d1e9e269681a91abd42ae9f4ef3d0abeea856d8ce73c75b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cace0c96dac387a7d1e9e269681a91abd42ae9f4ef3d0abeea856d8ce73c75b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-224473",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-224473/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-224473",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-224473",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-224473",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4cc0cf178127905898f1354426cb7bdeec6a8f9da7ea469daa7c48b78de0f0cb",
	            "SandboxKey": "/var/run/docker/netns/4cc0cf178127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33402"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-224473": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:4c:7f:26:89:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e265c78f13e163f01ca03016e808bf839fdf2ece6a395ef70fd325a8d653e968",
	                    "EndpointID": "dfd14c1e1c530c39e9a5df1a7014ec85a031307e2fc05a98b34bc3611307f504",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-224473",
	                        "cb725c648f7d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
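The inspect output above confirms that every container port is published only on 127.0.0.1 with an ephemeral host port (22/tcp -> 33399). As a cross-check, the same value can be read back with the Go-template query that minikube itself uses later in these logs; a minimal sketch, with the profile name taken from this test:

docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubernetes-upgrade-224473
# expected to print 33399, matching the "Ports" map in the inspect output above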
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-224473 -n kubernetes-upgrade-224473
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-224473 -n kubernetes-upgrade-224473: exit status 2 (463.479855ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
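The harness tolerates this because minikube status reports per-component health through its exit code: the host container is Running, yet exit status 2 signals that the cluster behind it is not fully up. The breakdown can be printed directly; a sketch, assuming this build supports the JSON output flag:

out/minikube-linux-arm64 status -p kubernetes-upgrade-224473 --output json
# emits host/kubelet/apiserver/kubeconfig state; a non-zero exit accompanies any unhealthy component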
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-224473 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-224473 logs -n 25: (1.219393073s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-869533 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                   │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:20 UTC │                     │
	│ start   │ -p NoKubernetes-869533 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:20 UTC │ 12 Dec 25 01:21 UTC │
	│ start   │ -p missing-upgrade-812493 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-812493    │ jenkins │ v1.35.0 │ 12 Dec 25 01:20 UTC │ 12 Dec 25 01:21 UTC │
	│ start   │ -p NoKubernetes-869533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p missing-upgrade-812493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-812493    │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p missing-upgrade-812493                                                                                                                       │ missing-upgrade-812493    │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-224473 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ stop    │ -p kubernetes-upgrade-224473                                                                                                                    │ kubernetes-upgrade-224473 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-224473 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ delete  │ -p NoKubernetes-869533                                                                                                                          │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p NoKubernetes-869533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ ssh     │ -p NoKubernetes-869533 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │                     │
	│ stop    │ -p NoKubernetes-869533                                                                                                                          │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p NoKubernetes-869533 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ ssh     │ -p NoKubernetes-869533 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │                     │
	│ delete  │ -p NoKubernetes-869533                                                                                                                          │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p stopped-upgrade-204630 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-204630    │ jenkins │ v1.35.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ stop    │ stopped-upgrade-204630 stop                                                                                                                     │ stopped-upgrade-204630    │ jenkins │ v1.35.0 │ 12 Dec 25 01:24 UTC │ 12 Dec 25 01:24 UTC │
	│ start   │ -p stopped-upgrade-204630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-204630    │ jenkins │ v1.37.0 │ 12 Dec 25 01:24 UTC │ 12 Dec 25 01:28 UTC │
	│ delete  │ -p stopped-upgrade-204630                                                                                                                       │ stopped-upgrade-204630    │ jenkins │ v1.37.0 │ 12 Dec 25 01:28 UTC │ 12 Dec 25 01:28 UTC │
	│ start   │ -p running-upgrade-260319 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-260319    │ jenkins │ v1.35.0 │ 12 Dec 25 01:28 UTC │ 12 Dec 25 01:29 UTC │
	│ start   │ -p running-upgrade-260319 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-260319    │ jenkins │ v1.37.0 │ 12 Dec 25 01:29 UTC │ 12 Dec 25 01:33 UTC │
	│ delete  │ -p running-upgrade-260319                                                                                                                       │ running-upgrade-260319    │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ start   │ -p pause-249141 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-249141              │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:35 UTC │
	│ start   │ -p pause-249141 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-249141              │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
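	# The last two kubernetes-upgrade-224473 rows above are the failing upgrade path: start on
	# v1.28.0, stop, then start on v1.35.0-beta.0 (note the missing END TIME on that final start).
	# Repro sketch, commands copied verbatim from the audit table:
	#   out/minikube-linux-arm64 start -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	#   out/minikube-linux-arm64 stop -p kubernetes-upgrade-224473
	#   out/minikube-linux-arm64 start -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker --container-runtime=crio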
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:35:08
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:35:08.432549  701970 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:35:08.432739  701970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:08.432770  701970 out.go:374] Setting ErrFile to fd 2...
	I1212 01:35:08.432791  701970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:08.433079  701970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 01:35:08.433480  701970 out.go:368] Setting JSON to false
	I1212 01:35:08.434505  701970 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15454,"bootTime":1765487855,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 01:35:08.434600  701970 start.go:143] virtualization:  
	I1212 01:35:08.438342  701970 out.go:179] * [pause-249141] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:35:08.442294  701970 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:35:08.442830  701970 notify.go:221] Checking for updates...
	I1212 01:35:08.449217  701970 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:35:08.452730  701970 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 01:35:08.455626  701970 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 01:35:08.458560  701970 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:35:08.461543  701970 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:35:08.465043  701970 config.go:182] Loaded profile config "pause-249141": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:35:08.468424  701970 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:35:08.498355  701970 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:35:08.498473  701970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:08.580420  701970 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 01:35:08.570408998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:08.580521  701970 docker.go:319] overlay module found
	I1212 01:35:08.583615  701970 out.go:179] * Using the docker driver based on existing profile
	I1212 01:35:08.586464  701970 start.go:309] selected driver: docker
	I1212 01:35:08.586482  701970 start.go:927] validating driver "docker" against &{Name:pause-249141 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-249141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:08.586613  701970 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:35:08.586810  701970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:08.646382  701970 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 01:35:08.636768628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:08.647041  701970 cni.go:84] Creating CNI manager for ""
	I1212 01:35:08.647099  701970 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 01:35:08.647153  701970 start.go:353] cluster config:
	{Name:pause-249141 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-249141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:08.652231  701970 out.go:179] * Starting "pause-249141" primary control-plane node in "pause-249141" cluster
	I1212 01:35:08.655107  701970 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 01:35:08.657960  701970 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:35:08.660768  701970 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 01:35:08.660818  701970 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 01:35:08.660832  701970 cache.go:65] Caching tarball of preloaded images
	I1212 01:35:08.660863  701970 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:35:08.660918  701970 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 01:35:08.660929  701970 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 01:35:08.661064  701970 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/config.json ...
	I1212 01:35:08.680387  701970 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:35:08.680410  701970 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:35:08.680425  701970 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:35:08.680452  701970 start.go:360] acquireMachinesLock for pause-249141: {Name:mk79759c1c447dc601621b15c9306e4ca53cb862 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:35:08.680513  701970 start.go:364] duration metric: took 34.739µs to acquireMachinesLock for "pause-249141"
	I1212 01:35:08.680542  701970 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:35:08.680552  701970 fix.go:54] fixHost starting: 
	I1212 01:35:08.680809  701970 cli_runner.go:164] Run: docker container inspect pause-249141 --format={{.State.Status}}
	I1212 01:35:08.699323  701970 fix.go:112] recreateIfNeeded on pause-249141: state=Running err=<nil>
	W1212 01:35:08.699353  701970 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:35:08.702657  701970 out.go:252] * Updating the running docker "pause-249141" container ...
	I1212 01:35:08.702724  701970 machine.go:94] provisionDockerMachine start ...
	I1212 01:35:08.702818  701970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249141
	I1212 01:35:08.719376  701970 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:08.719694  701970 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1212 01:35:08.719702  701970 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:35:08.870476  701970 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-249141
	
	I1212 01:35:08.870500  701970 ubuntu.go:182] provisioning hostname "pause-249141"
	I1212 01:35:08.870575  701970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249141
	I1212 01:35:08.889124  701970 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:08.889459  701970 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1212 01:35:08.889477  701970 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-249141 && echo "pause-249141" | sudo tee /etc/hostname
	I1212 01:35:09.048315  701970 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-249141
	
	I1212 01:35:09.048431  701970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249141
	I1212 01:35:09.065764  701970 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:09.066080  701970 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1212 01:35:09.066104  701970 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-249141' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-249141/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-249141' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:35:09.214817  701970 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:35:09.214843  701970 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 01:35:09.214871  701970 ubuntu.go:190] setting up certificates
	I1212 01:35:09.214887  701970 provision.go:84] configureAuth start
	I1212 01:35:09.214956  701970 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-249141
	I1212 01:35:09.234276  701970 provision.go:143] copyHostCerts
	I1212 01:35:09.234351  701970 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 01:35:09.234360  701970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 01:35:09.234500  701970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 01:35:09.234611  701970 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 01:35:09.234617  701970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 01:35:09.234646  701970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 01:35:09.234749  701970 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 01:35:09.234756  701970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 01:35:09.234840  701970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 01:35:09.234911  701970 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.pause-249141 san=[127.0.0.1 192.168.76.2 localhost minikube pause-249141]
	I1212 01:35:09.553158  701970 provision.go:177] copyRemoteCerts
	I1212 01:35:09.553254  701970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:35:09.553318  701970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249141
	I1212 01:35:09.571662  701970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/pause-249141/id_rsa Username:docker}
	I1212 01:35:09.682385  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:35:09.702271  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1212 01:35:09.721179  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:35:09.739202  701970 provision.go:87] duration metric: took 524.288476ms to configureAuth
	I1212 01:35:09.739231  701970 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:35:09.739494  701970 config.go:182] Loaded profile config "pause-249141": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:35:09.739616  701970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249141
	I1212 01:35:09.758855  701970 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:09.759181  701970 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1212 01:35:09.759202  701970 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:35:15.173962  701970 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:35:15.173987  701970 machine.go:97] duration metric: took 6.471250779s to provisionDockerMachine
	I1212 01:35:15.173999  701970 start.go:293] postStartSetup for "pause-249141" (driver="docker")
	I1212 01:35:15.174011  701970 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:35:15.174086  701970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:35:15.174167  701970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249141
	I1212 01:35:15.192958  701970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/pause-249141/id_rsa Username:docker}
	I1212 01:35:15.306490  701970 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:35:15.309726  701970 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:35:15.309759  701970 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:35:15.309770  701970 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 01:35:15.309823  701970 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 01:35:15.309908  701970 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 01:35:15.310013  701970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:35:15.317444  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 01:35:15.335679  701970 start.go:296] duration metric: took 161.664353ms for postStartSetup
	I1212 01:35:15.335761  701970 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:35:15.335802  701970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249141
	I1212 01:35:15.352776  701970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/pause-249141/id_rsa Username:docker}
	I1212 01:35:15.460092  701970 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:35:15.465221  701970 fix.go:56] duration metric: took 6.784662865s for fixHost
	I1212 01:35:15.465247  701970 start.go:83] releasing machines lock for "pause-249141", held for 6.784720529s
	I1212 01:35:15.465329  701970 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-249141
	I1212 01:35:15.481623  701970 ssh_runner.go:195] Run: cat /version.json
	I1212 01:35:15.481682  701970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249141
	I1212 01:35:15.481756  701970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:35:15.481808  701970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249141
	I1212 01:35:15.502033  701970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/pause-249141/id_rsa Username:docker}
	I1212 01:35:15.525315  701970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/pause-249141/id_rsa Username:docker}
	I1212 01:35:15.606488  701970 ssh_runner.go:195] Run: systemctl --version
	I1212 01:35:15.701505  701970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:35:15.746139  701970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:35:15.750583  701970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:35:15.750665  701970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:35:15.759698  701970 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 01:35:15.759727  701970 start.go:496] detecting cgroup driver to use...
	I1212 01:35:15.759758  701970 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:35:15.759827  701970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:35:15.775864  701970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:35:15.792084  701970 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:35:15.792190  701970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:35:15.811171  701970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:35:15.825204  701970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:35:15.961890  701970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:35:16.126672  701970 docker.go:234] disabling docker service ...
	I1212 01:35:16.126780  701970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:35:16.143244  701970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:35:16.157164  701970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:35:16.291293  701970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:35:16.438317  701970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:35:16.451433  701970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:35:16.465481  701970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 01:35:16.465598  701970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:16.474244  701970 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:35:16.474352  701970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:16.483394  701970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:16.492288  701970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:16.501102  701970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:35:16.509442  701970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:16.518485  701970 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:16.526804  701970 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:16.535292  701970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:35:16.542947  701970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:35:16.550101  701970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:16.681924  701970 ssh_runner.go:195] Run: sudo systemctl restart crio
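	# The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place before this restart;
	# after those commands the touched keys should read roughly as follows (a sketch inferred from
	# the sed expressions -- the resulting file itself is not dumped in this log):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]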
	I1212 01:35:16.929759  701970 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:35:16.929830  701970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:35:16.935698  701970 start.go:564] Will wait 60s for crictl version
	I1212 01:35:16.935765  701970 ssh_runner.go:195] Run: which crictl
	I1212 01:35:16.939781  701970 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:35:16.972321  701970 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 01:35:16.972420  701970 ssh_runner.go:195] Run: crio --version
	I1212 01:35:17.018073  701970 ssh_runner.go:195] Run: crio --version
	I1212 01:35:17.060128  701970 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 01:35:16.655382  664006 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001443347s
	I1212 01:35:16.655414  664006 kubeadm.go:319] 
	I1212 01:35:16.655468  664006 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:35:16.655499  664006 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:35:16.655598  664006 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:35:16.655603  664006 kubeadm.go:319] 
	I1212 01:35:16.655701  664006 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:35:16.655731  664006 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:35:16.655760  664006 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:35:16.655764  664006 kubeadm.go:319] 
	I1212 01:35:16.660115  664006 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:35:16.660546  664006 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:35:16.660662  664006 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:35:16.660928  664006 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 01:35:16.660940  664006 kubeadm.go:319] 
	I1212 01:35:16.661010  664006 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:35:16.661072  664006 kubeadm.go:403] duration metric: took 12m7.68470731s to StartCluster
	I1212 01:35:16.661109  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:35:16.661169  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:35:16.692829  664006 cri.go:89] found id: ""
	I1212 01:35:16.692861  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.692870  664006 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:35:16.692876  664006 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:35:16.692941  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:35:16.732997  664006 cri.go:89] found id: ""
	I1212 01:35:16.733021  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.733030  664006 logs.go:284] No container was found matching "etcd"
	I1212 01:35:16.733035  664006 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:35:16.733095  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:35:16.785196  664006 cri.go:89] found id: ""
	I1212 01:35:16.785218  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.785227  664006 logs.go:284] No container was found matching "coredns"
	I1212 01:35:16.785233  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:35:16.785290  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:35:16.825764  664006 cri.go:89] found id: ""
	I1212 01:35:16.825785  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.825793  664006 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:35:16.825800  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:35:16.825856  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:35:16.856866  664006 cri.go:89] found id: ""
	I1212 01:35:16.856888  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.856896  664006 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:35:16.856902  664006 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:35:16.856961  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:35:16.886621  664006 cri.go:89] found id: ""
	I1212 01:35:16.886644  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.886652  664006 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:35:16.886659  664006 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:35:16.886732  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:35:16.913954  664006 cri.go:89] found id: ""
	I1212 01:35:16.913980  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.913989  664006 logs.go:284] No container was found matching "kindnet"
	I1212 01:35:16.913996  664006 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:35:16.914052  664006 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:35:16.951984  664006 cri.go:89] found id: ""
	I1212 01:35:16.952015  664006 logs.go:282] 0 containers: []
	W1212 01:35:16.952023  664006 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:35:16.952032  664006 logs.go:123] Gathering logs for kubelet ...
	I1212 01:35:16.952044  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:35:17.030761  664006 logs.go:123] Gathering logs for dmesg ...
	I1212 01:35:17.030802  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:35:17.049006  664006 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:35:17.049033  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:35:17.145593  664006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:35:17.145617  664006 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:35:17.145629  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:35:17.188091  664006 logs.go:123] Gathering logs for container status ...
	I1212 01:35:17.188169  664006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:35:17.226761  664006 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001443347s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:35:17.226810  664006 out.go:285] * 
	W1212 01:35:17.226863  664006 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output quoted above]
	
	W1212 01:35:17.226880  664006 out.go:285] * 
	W1212 01:35:17.229004  664006 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:35:17.236108  664006 out.go:203] 
	W1212 01:35:17.238117  664006 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output quoted above]
	
	W1212 01:35:17.238179  664006 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:35:17.238203  664006 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:35:17.241596  664006 out.go:203] 
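
What actually kills this start only becomes explicit further down, in the kubelet journal: kubelet v1.35.0-beta.0 validates the host's cgroup version on startup and exits immediately on a cgroup v1 host, so kubeadm's wait-control-plane phase spends its full 4m0s waiting on a kubelet that can never come up. A generic way to confirm which cgroup version a host runs (a standard check, not part of this test run):

	stat -fc %T /sys/fs/cgroup    # prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
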
	I1212 01:35:17.063120  701970 cli_runner.go:164] Run: docker network inspect pause-249141 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:35:17.085011  701970 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:35:17.091409  701970 kubeadm.go:884] updating cluster {Name:pause-249141 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-249141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:35:17.091548  701970 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 01:35:17.091599  701970 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:17.136092  701970 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:35:17.136115  701970 crio.go:433] Images already preloaded, skipping extraction
	I1212 01:35:17.136179  701970 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:17.164467  701970 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:35:17.164542  701970 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:35:17.164563  701970 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1212 01:35:17.164706  701970 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-249141 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-249141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:35:17.164838  701970 ssh_runner.go:195] Run: crio config
	I1212 01:35:17.261400  701970 cni.go:84] Creating CNI manager for ""
	I1212 01:35:17.261427  701970 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 01:35:17.261449  701970 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:35:17.261473  701970 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-249141 NodeName:pause-249141 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:35:17.261604  701970 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-249141"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
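kubeadm can sanity-check a rendered document set like the one above before any init or upgrade is attempted. A minimal sketch, assuming the config has already been written to the path used by the init command earlier in this report:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml    # non-zero exit on unknown fields or unsupported API versions
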
	I1212 01:35:17.261676  701970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 01:35:17.279353  701970 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:35:17.279429  701970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:35:17.290436  701970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1212 01:35:17.305478  701970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:35:17.321678  701970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1212 01:35:17.338908  701970 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:35:17.342745  701970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:17.535093  701970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:17.567261  701970 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141 for IP: 192.168.76.2
	I1212 01:35:17.567280  701970 certs.go:195] generating shared ca certs ...
	I1212 01:35:17.567296  701970 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:17.567439  701970 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 01:35:17.567481  701970 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 01:35:17.567489  701970 certs.go:257] generating profile certs ...
	I1212 01:35:17.567585  701970 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/client.key
	I1212 01:35:17.567641  701970 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/apiserver.key.1af0cdd4
	I1212 01:35:17.567683  701970 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/proxy-client.key
	I1212 01:35:17.567793  701970 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 01:35:17.567825  701970 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 01:35:17.567833  701970 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:35:17.567887  701970 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:35:17.567913  701970 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:35:17.567941  701970 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 01:35:17.567995  701970 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 01:35:17.568608  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:35:17.587217  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:35:17.615738  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:35:17.639077  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:35:17.662014  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 01:35:17.685403  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:35:17.712116  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:35:17.737467  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:35:17.764904  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 01:35:17.785124  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:35:17.807519  701970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 01:35:17.824939  701970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:35:17.838230  701970 ssh_runner.go:195] Run: openssl version
	I1212 01:35:17.845386  701970 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 01:35:17.853346  701970 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 01:35:17.861264  701970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 01:35:17.865964  701970 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 01:35:17.866049  701970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 01:35:17.911396  701970 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:35:17.920463  701970 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.929074  701970 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:35:17.937969  701970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.943074  701970 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.943170  701970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.996053  701970 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:35:18.008766  701970 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 01:35:18.020858  701970 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 01:35:18.032119  701970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 01:35:18.039884  701970 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 01:35:18.039963  701970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 01:35:18.097204  701970 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
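
The hash-then-symlink sequence above is OpenSSL's CA lookup convention: "openssl x509 -hash -noout" prints the subject-name hash, and the trust store resolves CAs through /etc/ssl/certs/<hash>.0 symlinks, which is what the "test -L" probes verify. Condensed into one sketch (hash value illustrative, e.g. b5213941.0 for minikubeCA.pem above):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
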
	I1212 01:35:18.107325  701970 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:35:18.112519  701970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:35:18.164641  701970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:35:18.216209  701970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:35:18.265592  701970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:35:18.308990  701970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:35:18.363559  701970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
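
Each -checkend 86400 run above asks one question: will this certificate still be valid 86400 seconds (24 hours) from now? Exit status 0 means yes, non-zero means it expires within the window; judging from the "skipping valid signed profile cert regeneration" lines earlier, this is how minikube decides the existing certs can be reused. The idiom in isolation:

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
	    && echo "valid for at least 24h" || echo "expires within 24h"
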
	I1212 01:35:18.405183  701970 kubeadm.go:401] StartCluster: {Name:pause-249141 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-249141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:18.405298  701970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:35:18.405350  701970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:35:18.452932  701970 cri.go:89] found id: "c771bc06bce8f82dca79bdb4ca387282bc81275d57a564d32a9a0697a87b2d17"
	I1212 01:35:18.452950  701970 cri.go:89] found id: "45aec66ba1df15580fb0dc430c9ca8833004e5cfc642d8cb7fbac231cbdf9574"
	I1212 01:35:18.452955  701970 cri.go:89] found id: "e704c7281fcb74b9686f6baa06351c26b69648ee741e499cac05da9256535e0a"
	I1212 01:35:18.452959  701970 cri.go:89] found id: "595d301c32a6e5bcd377b0b53fca8b556fd8a0a9887c54cb4dbc74fae9bc5012"
	I1212 01:35:18.452962  701970 cri.go:89] found id: "adb8f76bff1a17e46695b9272558293767fdfe57f417ac8f86c718a1507b74ce"
	I1212 01:35:18.452965  701970 cri.go:89] found id: "93423cf437a5df66043e9d8c67f967f73a3aba50c58c120f3210fc19fee0e72a"
	I1212 01:35:18.452968  701970 cri.go:89] found id: "3ec4d221661b7f77facbb9727ed33f4f819f2dacde148a40f053f0a805768ac9"
	I1212 01:35:18.452971  701970 cri.go:89] found id: ""
	I1212 01:35:18.453031  701970 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 01:35:18.474158  701970 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T01:35:18Z" level=error msg="open /run/runc: no such file or directory"
	I1212 01:35:18.474242  701970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:35:18.488304  701970 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 01:35:18.488321  701970 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 01:35:18.488376  701970 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:35:18.498958  701970 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:35:18.499651  701970 kubeconfig.go:125] found "pause-249141" server: "https://192.168.76.2:8443"
	I1212 01:35:18.500471  701970 kapi.go:59] client config for pause-249141: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 01:35:18.500938  701970 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 01:35:18.500950  701970 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 01:35:18.500956  701970 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 01:35:18.500961  701970 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 01:35:18.500965  701970 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 01:35:18.501232  701970 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:35:18.512511  701970 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 01:35:18.512539  701970 kubeadm.go:602] duration metric: took 24.212504ms to restartPrimaryControlPlane
	I1212 01:35:18.512548  701970 kubeadm.go:403] duration metric: took 107.374331ms to StartCluster
	I1212 01:35:18.512562  701970 settings.go:142] acquiring lock: {Name:mk274c10b2238dc32d72b68ac2e1ec517b8a72b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.512619  701970 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 01:35:18.513444  701970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/kubeconfig: {Name:mk40d877648a1b47389942ad828ec218ac64f642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.513651  701970 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:35:18.513997  701970 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:35:18.514556  701970 config.go:182] Loaded profile config "pause-249141": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:35:18.516932  701970 out.go:179] * Enabled addons: 
	I1212 01:35:18.517040  701970 out.go:179] * Verifying Kubernetes components...
	
	
	==> CRI-O <==
	Dec 12 01:23:03 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:23:03.743656842Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 01:23:03 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:23:03.743693518Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 01:23:03 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:23:03.743738308Z" level=info msg="Create NRI interface"
	Dec 12 01:23:03 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:23:03.743836816Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 01:23:03 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:23:03.743844956Z" level=info msg="runtime interface created"
	Dec 12 01:23:03 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:23:03.743857805Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 01:23:03 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:23:03.743864254Z" level=info msg="runtime interface starting up..."
	Dec 12 01:23:03 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:23:03.743870465Z" level=info msg="starting plugins..."
	Dec 12 01:23:03 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:23:03.743883248Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 01:23:03 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:23:03.743946918Z" level=info msg="No systemd watchdog enabled"
	Dec 12 01:23:03 kubernetes-upgrade-224473 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 01:27:11 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:27:11.976572479Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=40b77cab-f98f-44f8-8cb5-d7cb6c85a0dd name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:27:11 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:27:11.982476216Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0d0cedff-0555-4f5b-bca4-f50f305a1a51 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:27:11 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:27:11.983230101Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=dc9d8afd-e830-4500-8fea-af3b2a9556af name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:27:11 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:27:11.98380863Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3e4e5d5b-8b3e-47d3-b0d1-1a06f0f4d33b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:27:11 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:27:11.984349703Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=18a0d3d2-48d6-47cd-97e2-3b60394fa020 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:27:11 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:27:11.98486937Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4161e525-de2c-4eee-8107-96cc6a573940 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:27:11 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:27:11.985378322Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=f04f4d98-03ee-4f7f-bafb-973d5cfafe78 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:31:14 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:31:14.41756805Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=ae13a042-ec02-42c0-97c8-bc57947711fb name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:31:14 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:31:14.418317636Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=c3db75b7-087a-4f71-b122-4b7ac874d6fe name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:31:14 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:31:14.41890254Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=70b74b9a-a156-4253-b728-a6f68e252924 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:31:14 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:31:14.419426105Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=c99534bd-dfeb-438b-9828-fd4c0eb50a29 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:31:14 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:31:14.419866398Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d3532e73-89b6-417a-8753-9a02576e42ee name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:31:14 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:31:14.42038947Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=e1254188-8907-42ec-be20-3e152854169e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 01:31:14 kubernetes-upgrade-224473 crio[614]: time="2025-12-12T01:31:14.420807109Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=2de3df6f-b25d-428b-a681-3c8278dff669 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +3.673413] overlayfs: idmapped layers are currently not supported
	[ +34.404177] overlayfs: idmapped layers are currently not supported
	[Dec12 00:59] overlayfs: idmapped layers are currently not supported
	[Dec12 01:00] overlayfs: idmapped layers are currently not supported
	[  +2.854463] overlayfs: idmapped layers are currently not supported
	[Dec12 01:01] overlayfs: idmapped layers are currently not supported
	[Dec12 01:02] overlayfs: idmapped layers are currently not supported
	[Dec12 01:03] overlayfs: idmapped layers are currently not supported
	[Dec12 01:08] overlayfs: idmapped layers are currently not supported
	[ +34.061772] overlayfs: idmapped layers are currently not supported
	[Dec12 01:09] overlayfs: idmapped layers are currently not supported
	[Dec12 01:11] overlayfs: idmapped layers are currently not supported
	[Dec12 01:12] overlayfs: idmapped layers are currently not supported
	[Dec12 01:13] overlayfs: idmapped layers are currently not supported
	[Dec12 01:14] overlayfs: idmapped layers are currently not supported
	[  +1.592007] overlayfs: idmapped layers are currently not supported
	[Dec12 01:15] overlayfs: idmapped layers are currently not supported
	[ +24.197582] overlayfs: idmapped layers are currently not supported
	[Dec12 01:16] overlayfs: idmapped layers are currently not supported
	[ +26.194679] overlayfs: idmapped layers are currently not supported
	[Dec12 01:17] overlayfs: idmapped layers are currently not supported
	[Dec12 01:18] overlayfs: idmapped layers are currently not supported
	[Dec12 01:21] overlayfs: idmapped layers are currently not supported
	[Dec12 01:22] overlayfs: idmapped layers are currently not supported
	[Dec12 01:34] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:35:19 up  4:17,  0 user,  load average: 1.19, 1.18, 1.50
	Linux kubernetes-upgrade-224473 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:35:17 kubernetes-upgrade-224473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:17 kubernetes-upgrade-224473 kubelet[12243]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 01:35:17 kubernetes-upgrade-224473 kubelet[12243]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 01:35:17 kubernetes-upgrade-224473 kubelet[12243]: E1212 01:35:17.616848   12243 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:35:17 kubernetes-upgrade-224473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:35:17 kubernetes-upgrade-224473 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:35:18 kubernetes-upgrade-224473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 12 01:35:18 kubernetes-upgrade-224473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:18 kubernetes-upgrade-224473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:18 kubernetes-upgrade-224473 kubelet[12249]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 01:35:18 kubernetes-upgrade-224473 kubelet[12249]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 01:35:18 kubernetes-upgrade-224473 kubelet[12249]: E1212 01:35:18.566799   12249 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:35:18 kubernetes-upgrade-224473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:35:18 kubernetes-upgrade-224473 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:35:19 kubernetes-upgrade-224473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 12 01:35:19 kubernetes-upgrade-224473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:19 kubernetes-upgrade-224473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:19 kubernetes-upgrade-224473 kubelet[12277]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 01:35:19 kubernetes-upgrade-224473 kubelet[12277]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 01:35:19 kubernetes-upgrade-224473 kubelet[12277]: E1212 01:35:19.324370   12277 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:35:19 kubernetes-upgrade-224473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:35:19 kubernetes-upgrade-224473 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:35:19 kubernetes-upgrade-224473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 12 01:35:19 kubernetes-upgrade-224473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:19 kubernetes-upgrade-224473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
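
The kubelet journal above pins the restart loop (counter at 963 and climbing) on a single validation failure: kubelet v1.35 refuses to run on a cgroup v1 host unless told otherwise. The SystemVerification warning earlier in this report names the escape hatch itself; spelled as a KubeletConfiguration fragment per that wording (a sketch, not the harness's actual config):

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

The same warning notes the preflight validation must also be skipped explicitly; the kubeadm init command quoted in this report already lists SystemVerification under --ignore-preflight-errors.
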
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-224473 -n kubernetes-upgrade-224473
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-224473 -n kubernetes-upgrade-224473: exit status 2 (478.410696ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-224473" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-224473" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-224473
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-224473: (2.752788454s)
--- FAIL: TestKubernetesUpgrade (782.31s)

TestPause/serial/Pause (8.33s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-249141 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-249141 --alsologtostderr -v=5: exit status 80 (2.109105358s)

-- stdout --
	* Pausing node pause-249141 ... 
	
	

-- /stdout --
** stderr ** 
	I1212 01:35:37.320042  705534 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:35:37.320208  705534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:37.320215  705534 out.go:374] Setting ErrFile to fd 2...
	I1212 01:35:37.320220  705534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:37.320469  705534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 01:35:37.320704  705534 out.go:368] Setting JSON to false
	I1212 01:35:37.320721  705534 mustload.go:66] Loading cluster: pause-249141
	I1212 01:35:37.321193  705534 config.go:182] Loaded profile config "pause-249141": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:35:37.321663  705534 cli_runner.go:164] Run: docker container inspect pause-249141 --format={{.State.Status}}
	I1212 01:35:37.356111  705534 host.go:66] Checking if "pause-249141" exists ...
	I1212 01:35:37.356479  705534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:37.443984  705534 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-12 01:35:37.433095602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:37.444645  705534 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-249141 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 01:35:37.448145  705534 out.go:179] * Pausing node pause-249141 ... 
	I1212 01:35:37.452641  705534 host.go:66] Checking if "pause-249141" exists ...
	I1212 01:35:37.452975  705534 ssh_runner.go:195] Run: systemctl --version
	I1212 01:35:37.453023  705534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249141
	I1212 01:35:37.488778  705534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/pause-249141/id_rsa Username:docker}
	I1212 01:35:37.599673  705534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:35:37.616960  705534 pause.go:52] kubelet running: true
	I1212 01:35:37.617047  705534 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 01:35:37.889820  705534 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 01:35:37.889936  705534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 01:35:37.970015  705534 cri.go:89] found id: "5b492324a19d7368866c313237fbae39b56ac6a28750bbb96d9a4aab8764b945"
	I1212 01:35:37.970042  705534 cri.go:89] found id: "86eb237121dbb40df2f9508434a7b2d0bb0bbb9a812e6b01a538de3d0c1b435a"
	I1212 01:35:37.970047  705534 cri.go:89] found id: "550a496e315f16d7f7d774db26779f759f5db45901e0f542db49dd51ea30338e"
	I1212 01:35:37.970051  705534 cri.go:89] found id: "06ed94bd38e372eff5b039f17f5b555d55348a17a2f305a892bbefaccb2dff7d"
	I1212 01:35:37.970055  705534 cri.go:89] found id: "baa32678c2651eca75a4f6b1c2685393cdffab03f5c5961c36b3fc1b1a93db68"
	I1212 01:35:37.970058  705534 cri.go:89] found id: "e1c1abd5f32e6476b2cf029011d2249f974696e5db5ed40ae5e5686e0bd54717"
	I1212 01:35:37.970061  705534 cri.go:89] found id: "4e079fbf56172a4e7de82ff26b90282dce34661a3ad4a9b1340b61c9cbb4d555"
	I1212 01:35:37.970064  705534 cri.go:89] found id: "c771bc06bce8f82dca79bdb4ca387282bc81275d57a564d32a9a0697a87b2d17"
	I1212 01:35:37.970068  705534 cri.go:89] found id: "45aec66ba1df15580fb0dc430c9ca8833004e5cfc642d8cb7fbac231cbdf9574"
	I1212 01:35:37.970074  705534 cri.go:89] found id: "e704c7281fcb74b9686f6baa06351c26b69648ee741e499cac05da9256535e0a"
	I1212 01:35:37.970077  705534 cri.go:89] found id: "595d301c32a6e5bcd377b0b53fca8b556fd8a0a9887c54cb4dbc74fae9bc5012"
	I1212 01:35:37.970080  705534 cri.go:89] found id: "adb8f76bff1a17e46695b9272558293767fdfe57f417ac8f86c718a1507b74ce"
	I1212 01:35:37.970083  705534 cri.go:89] found id: "93423cf437a5df66043e9d8c67f967f73a3aba50c58c120f3210fc19fee0e72a"
	I1212 01:35:37.970087  705534 cri.go:89] found id: "3ec4d221661b7f77facbb9727ed33f4f819f2dacde148a40f053f0a805768ac9"
	I1212 01:35:37.970090  705534 cri.go:89] found id: ""
	I1212 01:35:37.970146  705534 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 01:35:37.981523  705534 retry.go:31] will retry after 321.247919ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T01:35:37Z" level=error msg="open /run/runc: no such file or directory"
	I1212 01:35:38.303931  705534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:35:38.319136  705534 pause.go:52] kubelet running: false
	I1212 01:35:38.319217  705534 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 01:35:38.500551  705534 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 01:35:38.500641  705534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 01:35:38.598449  705534 cri.go:89] found id: "5b492324a19d7368866c313237fbae39b56ac6a28750bbb96d9a4aab8764b945"
	I1212 01:35:38.598482  705534 cri.go:89] found id: "86eb237121dbb40df2f9508434a7b2d0bb0bbb9a812e6b01a538de3d0c1b435a"
	I1212 01:35:38.598503  705534 cri.go:89] found id: "550a496e315f16d7f7d774db26779f759f5db45901e0f542db49dd51ea30338e"
	I1212 01:35:38.598508  705534 cri.go:89] found id: "06ed94bd38e372eff5b039f17f5b555d55348a17a2f305a892bbefaccb2dff7d"
	I1212 01:35:38.598511  705534 cri.go:89] found id: "baa32678c2651eca75a4f6b1c2685393cdffab03f5c5961c36b3fc1b1a93db68"
	I1212 01:35:38.598515  705534 cri.go:89] found id: "e1c1abd5f32e6476b2cf029011d2249f974696e5db5ed40ae5e5686e0bd54717"
	I1212 01:35:38.598518  705534 cri.go:89] found id: "4e079fbf56172a4e7de82ff26b90282dce34661a3ad4a9b1340b61c9cbb4d555"
	I1212 01:35:38.598522  705534 cri.go:89] found id: "c771bc06bce8f82dca79bdb4ca387282bc81275d57a564d32a9a0697a87b2d17"
	I1212 01:35:38.598524  705534 cri.go:89] found id: "45aec66ba1df15580fb0dc430c9ca8833004e5cfc642d8cb7fbac231cbdf9574"
	I1212 01:35:38.598539  705534 cri.go:89] found id: "e704c7281fcb74b9686f6baa06351c26b69648ee741e499cac05da9256535e0a"
	I1212 01:35:38.598545  705534 cri.go:89] found id: "595d301c32a6e5bcd377b0b53fca8b556fd8a0a9887c54cb4dbc74fae9bc5012"
	I1212 01:35:38.598549  705534 cri.go:89] found id: "adb8f76bff1a17e46695b9272558293767fdfe57f417ac8f86c718a1507b74ce"
	I1212 01:35:38.598552  705534 cri.go:89] found id: "93423cf437a5df66043e9d8c67f967f73a3aba50c58c120f3210fc19fee0e72a"
	I1212 01:35:38.598555  705534 cri.go:89] found id: "3ec4d221661b7f77facbb9727ed33f4f819f2dacde148a40f053f0a805768ac9"
	I1212 01:35:38.598561  705534 cri.go:89] found id: ""
	I1212 01:35:38.598618  705534 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 01:35:38.616565  705534 retry.go:31] will retry after 404.54969ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T01:35:38Z" level=error msg="open /run/runc: no such file or directory"
	I1212 01:35:39.022230  705534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:35:39.039205  705534 pause.go:52] kubelet running: false
	I1212 01:35:39.039289  705534 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 01:35:39.234324  705534 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 01:35:39.234403  705534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 01:35:39.316355  705534 cri.go:89] found id: "5b492324a19d7368866c313237fbae39b56ac6a28750bbb96d9a4aab8764b945"
	I1212 01:35:39.316432  705534 cri.go:89] found id: "86eb237121dbb40df2f9508434a7b2d0bb0bbb9a812e6b01a538de3d0c1b435a"
	I1212 01:35:39.316454  705534 cri.go:89] found id: "550a496e315f16d7f7d774db26779f759f5db45901e0f542db49dd51ea30338e"
	I1212 01:35:39.316472  705534 cri.go:89] found id: "06ed94bd38e372eff5b039f17f5b555d55348a17a2f305a892bbefaccb2dff7d"
	I1212 01:35:39.316486  705534 cri.go:89] found id: "baa32678c2651eca75a4f6b1c2685393cdffab03f5c5961c36b3fc1b1a93db68"
	I1212 01:35:39.316511  705534 cri.go:89] found id: "e1c1abd5f32e6476b2cf029011d2249f974696e5db5ed40ae5e5686e0bd54717"
	I1212 01:35:39.316528  705534 cri.go:89] found id: "4e079fbf56172a4e7de82ff26b90282dce34661a3ad4a9b1340b61c9cbb4d555"
	I1212 01:35:39.316544  705534 cri.go:89] found id: "c771bc06bce8f82dca79bdb4ca387282bc81275d57a564d32a9a0697a87b2d17"
	I1212 01:35:39.316560  705534 cri.go:89] found id: "45aec66ba1df15580fb0dc430c9ca8833004e5cfc642d8cb7fbac231cbdf9574"
	I1212 01:35:39.316579  705534 cri.go:89] found id: "e704c7281fcb74b9686f6baa06351c26b69648ee741e499cac05da9256535e0a"
	I1212 01:35:39.316596  705534 cri.go:89] found id: "595d301c32a6e5bcd377b0b53fca8b556fd8a0a9887c54cb4dbc74fae9bc5012"
	I1212 01:35:39.316610  705534 cri.go:89] found id: "adb8f76bff1a17e46695b9272558293767fdfe57f417ac8f86c718a1507b74ce"
	I1212 01:35:39.316626  705534 cri.go:89] found id: "93423cf437a5df66043e9d8c67f967f73a3aba50c58c120f3210fc19fee0e72a"
	I1212 01:35:39.316645  705534 cri.go:89] found id: "3ec4d221661b7f77facbb9727ed33f4f819f2dacde148a40f053f0a805768ac9"
	I1212 01:35:39.316665  705534 cri.go:89] found id: ""
	I1212 01:35:39.316729  705534 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 01:35:39.336462  705534 out.go:203] 
	W1212 01:35:39.339435  705534 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T01:35:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 01:35:39.339658  705534 out.go:285] * 
	W1212 01:35:39.347722  705534 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:35:39.352583  705534 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-249141 --alsologtostderr -v=5" : exit status 80
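The stderr above pinpoints the failure: kubelet stops cleanly, crictl ps still reports fourteen container IDs across the three namespaces, but all three attempts at sudo runc list -f json fail with "open /run/runc: no such file or directory", so the pause exits with GUEST_PAUSE. Below is a minimal Go sketch of the same two probes, intended to be run on the node (for example via minikube ssh); treating /run/runc as runc's state directory is an assumption read off the error text, not a verified default of this CRI-O build.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// First probe: container IDs per pod namespace, exactly as cri.go runs it above.
	for _, ns := range []string{"kube-system", "kubernetes-dashboard", "istio-operator"} {
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace="+ns).Output()
		fmt.Printf("crictl %-20s err=%v containers=%d\n", ns, err,
			len(strings.Fields(string(ids))))
	}

	// Second probe: the call that fails in the log. runc reads container state
	// from its root directory, which the error message suggests is /run/runc here.
	if _, err := os.Stat("/run/runc"); err != nil {
		fmt.Println("runc state dir:", err) // reproduces "no such file or directory"
	}
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	fmt.Printf("runc list: err=%v\n%s\n", err, strings.TrimSpace(string(out)))
}

If the first probe finds containers while the second cannot even open its state directory, the pause path has no way to enumerate what to freeze, which is the exact shape of the retry loop and final error captured above.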
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-249141
helpers_test.go:244: (dbg) docker inspect pause-249141:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8",
	        "Created": "2025-12-12T01:33:54.361995773Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 699411,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:33:54.424017771Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8/hosts",
	        "LogPath": "/var/lib/docker/containers/88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8/88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8-json.log",
	        "Name": "/pause-249141",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-249141:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-249141",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8",
	                "LowerDir": "/var/lib/docker/overlay2/9d2306f05d3ee51b32c40c92d5a66603d62319276a100bce921d8746bdffa8d7-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9d2306f05d3ee51b32c40c92d5a66603d62319276a100bce921d8746bdffa8d7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9d2306f05d3ee51b32c40c92d5a66603d62319276a100bce921d8746bdffa8d7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9d2306f05d3ee51b32c40c92d5a66603d62319276a100bce921d8746bdffa8d7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-249141",
	                "Source": "/var/lib/docker/volumes/pause-249141/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-249141",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-249141",
	                "name.minikube.sigs.k8s.io": "pause-249141",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4781a30b88a5045b0c4548e2c037c0f1fe8d0c6a5495f7cce10d21a92ca73841",
	            "SandboxKey": "/var/run/docker/netns/4781a30b88a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-249141": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:e1:2a:b5:24:4b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71b1cfc47b2b92d2839fa9051327318c5a1e86d53e80da924ee32224e2285d94",
	                    "EndpointID": "d90ed1690d0e1fc02e469afcc2ba48742dd94c90960c24153de9c1a6b9d72800",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-249141",
	                        "88cbebe85a31"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
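One practical detail in the inspect output above: the published HostPort for 22/tcp (33429) is how every ssh_runner call in the stderr reached the node, and the pause command extracted it at 01:35:37.453 with a docker container inspect -f template. A minimal Go sketch of the same lookup, using the container name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same Go template the pause command used to discover the SSH port.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"pause-249141").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Prints 33429 for the run captured above.
	fmt.Println("ssh port:", strings.TrimSpace(string(out)))
}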
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-249141 -n pause-249141
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-249141 -n pause-249141: exit status 2 (460.835385ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-249141 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-249141 logs -n 25: (1.724336258s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-869533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p missing-upgrade-812493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-812493    │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p missing-upgrade-812493                                                                                                                       │ missing-upgrade-812493    │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-224473 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ stop    │ -p kubernetes-upgrade-224473                                                                                                                    │ kubernetes-upgrade-224473 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-224473 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ delete  │ -p NoKubernetes-869533                                                                                                                          │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p NoKubernetes-869533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ ssh     │ -p NoKubernetes-869533 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │                     │
	│ stop    │ -p NoKubernetes-869533                                                                                                                          │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p NoKubernetes-869533 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ ssh     │ -p NoKubernetes-869533 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │                     │
	│ delete  │ -p NoKubernetes-869533                                                                                                                          │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p stopped-upgrade-204630 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-204630    │ jenkins │ v1.35.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ stop    │ stopped-upgrade-204630 stop                                                                                                                     │ stopped-upgrade-204630    │ jenkins │ v1.35.0 │ 12 Dec 25 01:24 UTC │ 12 Dec 25 01:24 UTC │
	│ start   │ -p stopped-upgrade-204630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-204630    │ jenkins │ v1.37.0 │ 12 Dec 25 01:24 UTC │ 12 Dec 25 01:28 UTC │
	│ delete  │ -p stopped-upgrade-204630                                                                                                                       │ stopped-upgrade-204630    │ jenkins │ v1.37.0 │ 12 Dec 25 01:28 UTC │ 12 Dec 25 01:28 UTC │
	│ start   │ -p running-upgrade-260319 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-260319    │ jenkins │ v1.35.0 │ 12 Dec 25 01:28 UTC │ 12 Dec 25 01:29 UTC │
	│ start   │ -p running-upgrade-260319 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-260319    │ jenkins │ v1.37.0 │ 12 Dec 25 01:29 UTC │ 12 Dec 25 01:33 UTC │
	│ delete  │ -p running-upgrade-260319                                                                                                                       │ running-upgrade-260319    │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ start   │ -p pause-249141 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-249141              │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:35 UTC │
	│ start   │ -p pause-249141 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-249141              │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │ 12 Dec 25 01:35 UTC │
	│ delete  │ -p kubernetes-upgrade-224473                                                                                                                    │ kubernetes-upgrade-224473 │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │ 12 Dec 25 01:35 UTC │
	│ start   │ -p force-systemd-flag-272786 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                     │ force-systemd-flag-272786 │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │                     │
	│ pause   │ -p pause-249141 --alsologtostderr -v=5                                                                                                          │ pause-249141              │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:35:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:35:23.450987  703668 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:35:23.451102  703668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:23.451112  703668 out.go:374] Setting ErrFile to fd 2...
	I1212 01:35:23.451117  703668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:23.451388  703668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 01:35:23.451777  703668 out.go:368] Setting JSON to false
	I1212 01:35:23.452707  703668 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15469,"bootTime":1765487855,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 01:35:23.452775  703668 start.go:143] virtualization:  
	I1212 01:35:23.457144  703668 out.go:179] * [force-systemd-flag-272786] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:35:23.460786  703668 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:35:23.460950  703668 notify.go:221] Checking for updates...
	I1212 01:35:23.468238  703668 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:35:23.471751  703668 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 01:35:23.474931  703668 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 01:35:23.478269  703668 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:35:23.481424  703668 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:35:23.485225  703668 config.go:182] Loaded profile config "pause-249141": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:35:23.485382  703668 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:35:23.539677  703668 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:35:23.539810  703668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:23.651743  703668 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 01:35:23.636314483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:23.651845  703668 docker.go:319] overlay module found
	I1212 01:35:23.655237  703668 out.go:179] * Using the docker driver based on user configuration
	I1212 01:35:23.658208  703668 start.go:309] selected driver: docker
	I1212 01:35:23.658223  703668 start.go:927] validating driver "docker" against <nil>
	I1212 01:35:23.658235  703668 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:35:23.658938  703668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:23.785346  703668 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 01:35:23.773402814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:23.785502  703668 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 01:35:23.785709  703668 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 01:35:23.789062  703668 out.go:179] * Using Docker driver with root privileges
	I1212 01:35:23.792143  703668 cni.go:84] Creating CNI manager for ""
	I1212 01:35:23.792210  703668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 01:35:23.792218  703668 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 01:35:23.792297  703668 start.go:353] cluster config:
	{Name:force-systemd-flag-272786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-272786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:23.795599  703668 out.go:179] * Starting "force-systemd-flag-272786" primary control-plane node in "force-systemd-flag-272786" cluster
	I1212 01:35:23.798488  703668 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 01:35:23.801379  703668 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:35:23.804150  703668 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 01:35:23.804193  703668 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 01:35:23.804214  703668 cache.go:65] Caching tarball of preloaded images
	I1212 01:35:23.804219  703668 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:35:23.804301  703668 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 01:35:23.804312  703668 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 01:35:23.804443  703668 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/config.json ...
	I1212 01:35:23.804462  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/config.json: {Name:mk0235976c494188d16fb0f0a2c8ac936db70904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:23.832530  703668 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:35:23.832555  703668 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:35:23.832571  703668 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:35:23.832599  703668 start.go:360] acquireMachinesLock for force-systemd-flag-272786: {Name:mkd74d51b820f71e81e014063b2059fe6d4e2a6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:35:23.832707  703668 start.go:364] duration metric: took 87.481µs to acquireMachinesLock for "force-systemd-flag-272786"
	I1212 01:35:23.832739  703668 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-272786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-272786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:35:23.832818  703668 start.go:125] createHost starting for "" (driver="docker")
	I1212 01:35:26.320874  701970 node_ready.go:49] node "pause-249141" is "Ready"
	I1212 01:35:26.320900  701970 node_ready.go:38] duration metric: took 7.385593977s for node "pause-249141" to be "Ready" ...
	I1212 01:35:26.320918  701970 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:35:26.320979  701970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:26.341021  701970 api_server.go:72] duration metric: took 7.827340572s to wait for apiserver process to appear ...
	I1212 01:35:26.341043  701970 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:35:26.341062  701970 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 01:35:26.362458  701970 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:35:26.362529  701970 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:35:26.841763  701970 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 01:35:26.853739  701970 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:35:26.853809  701970 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:35:27.341196  701970 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 01:35:27.351169  701970 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:35:27.351208  701970 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:35:27.841916  701970 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 01:35:27.850409  701970 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 01:35:27.852386  701970 api_server.go:141] control plane version: v1.34.2
	I1212 01:35:27.852420  701970 api_server.go:131] duration metric: took 1.511369891s to wait for apiserver health ...
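
The repeated 500s above are the apiserver's aggregated /healthz: every [+] hook has passed, but the [-]poststarthook/rbac/bootstrap-roles entry is still pending, so the endpoint fails as a whole until 01:35:27.85, when it flips to 200 and the wait completes in ~1.5s. A minimal Go sketch of this poll-until-healthy pattern (not minikube's actual api_server.go; TLS verification is skipped purely for illustration):

// healthzpoll.go - a hedged sketch of polling an apiserver /healthz endpoint
// until it reports 200 OK, as in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// Illustrative client only; a real check would trust the apiserver CA
	// instead of skipping verification.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "ok" case at 01:35:27.850409
			}
			// A 500 with "[-]poststarthook/..." lines means a post-start
			// hook has not finished; keep polling.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
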
	I1212 01:35:27.852431  701970 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:35:27.857674  701970 system_pods.go:59] 7 kube-system pods found
	I1212 01:35:27.857714  701970 system_pods.go:61] "coredns-66bc5c9577-5jwqj" [1a666390-6ced-45ed-9bce-50a39727a9f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:35:27.857724  701970 system_pods.go:61] "etcd-pause-249141" [3046d2d9-2d8b-44c1-8d60-385302cce3b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:35:27.857729  701970 system_pods.go:61] "kindnet-5s7pz" [99856c46-d02d-48bc-95ee-225a3ead3606] Running
	I1212 01:35:27.857735  701970 system_pods.go:61] "kube-apiserver-pause-249141" [f453ee7d-27ab-4f08-86fb-7539d51d38ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:35:27.857742  701970 system_pods.go:61] "kube-controller-manager-pause-249141" [ed329836-67de-4e56-bd9f-7884d50f0656] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:35:27.857747  701970 system_pods.go:61] "kube-proxy-nlvxp" [3ccbdf76-9f38-49c5-b877-2bd169aabd0f] Running
	I1212 01:35:27.857752  701970 system_pods.go:61] "kube-scheduler-pause-249141" [3d3e7b33-960a-41c4-9d6a-c1e02d98ce86] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:35:27.857758  701970 system_pods.go:74] duration metric: took 5.321603ms to wait for pod list to return data ...
	I1212 01:35:27.857770  701970 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:35:27.860325  701970 default_sa.go:45] found service account: "default"
	I1212 01:35:27.860351  701970 default_sa.go:55] duration metric: took 2.571835ms for default service account to be created ...
	I1212 01:35:27.860360  701970 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:35:27.863384  701970 system_pods.go:86] 7 kube-system pods found
	I1212 01:35:27.863431  701970 system_pods.go:89] "coredns-66bc5c9577-5jwqj" [1a666390-6ced-45ed-9bce-50a39727a9f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:35:27.863447  701970 system_pods.go:89] "etcd-pause-249141" [3046d2d9-2d8b-44c1-8d60-385302cce3b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:35:27.863454  701970 system_pods.go:89] "kindnet-5s7pz" [99856c46-d02d-48bc-95ee-225a3ead3606] Running
	I1212 01:35:27.863461  701970 system_pods.go:89] "kube-apiserver-pause-249141" [f453ee7d-27ab-4f08-86fb-7539d51d38ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:35:27.863473  701970 system_pods.go:89] "kube-controller-manager-pause-249141" [ed329836-67de-4e56-bd9f-7884d50f0656] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:35:27.863484  701970 system_pods.go:89] "kube-proxy-nlvxp" [3ccbdf76-9f38-49c5-b877-2bd169aabd0f] Running
	I1212 01:35:27.863490  701970 system_pods.go:89] "kube-scheduler-pause-249141" [3d3e7b33-960a-41c4-9d6a-c1e02d98ce86] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:35:27.863501  701970 system_pods.go:126] duration metric: took 3.134677ms to wait for k8s-apps to be running ...
	I1212 01:35:27.863509  701970 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:35:27.863577  701970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:35:27.878753  701970 system_svc.go:56] duration metric: took 15.234841ms WaitForService to wait for kubelet
	I1212 01:35:27.878780  701970 kubeadm.go:587] duration metric: took 9.365104158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:35:27.878797  701970 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:35:27.882426  701970 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 01:35:27.882461  701970 node_conditions.go:123] node cpu capacity is 2
	I1212 01:35:27.882474  701970 node_conditions.go:105] duration metric: took 3.671731ms to run NodePressure ...
	I1212 01:35:27.882487  701970 start.go:242] waiting for startup goroutines ...
	I1212 01:35:27.882494  701970 start.go:247] waiting for cluster config update ...
	I1212 01:35:27.882503  701970 start.go:256] writing updated cluster config ...
	I1212 01:35:27.882868  701970 ssh_runner.go:195] Run: rm -f paused
	I1212 01:35:27.888098  701970 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 01:35:27.888615  701970 kapi.go:59] client config for pause-249141: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 01:35:27.892394  701970 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5jwqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:23.836301  703668 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 01:35:23.836543  703668 start.go:159] libmachine.API.Create for "force-systemd-flag-272786" (driver="docker")
	I1212 01:35:23.836580  703668 client.go:173] LocalClient.Create starting
	I1212 01:35:23.836651  703668 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem
	I1212 01:35:23.836693  703668 main.go:143] libmachine: Decoding PEM data...
	I1212 01:35:23.836713  703668 main.go:143] libmachine: Parsing certificate...
	I1212 01:35:23.836766  703668 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem
	I1212 01:35:23.836788  703668 main.go:143] libmachine: Decoding PEM data...
	I1212 01:35:23.836805  703668 main.go:143] libmachine: Parsing certificate...
	I1212 01:35:23.837183  703668 cli_runner.go:164] Run: docker network inspect force-systemd-flag-272786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 01:35:23.868951  703668 cli_runner.go:211] docker network inspect force-systemd-flag-272786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 01:35:23.869030  703668 network_create.go:284] running [docker network inspect force-systemd-flag-272786] to gather additional debugging logs...
	I1212 01:35:23.869057  703668 cli_runner.go:164] Run: docker network inspect force-systemd-flag-272786
	W1212 01:35:23.903092  703668 cli_runner.go:211] docker network inspect force-systemd-flag-272786 returned with exit code 1
	I1212 01:35:23.903126  703668 network_create.go:287] error running [docker network inspect force-systemd-flag-272786]: docker network inspect force-systemd-flag-272786: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-272786 not found
	I1212 01:35:23.903150  703668 network_create.go:289] output of [docker network inspect force-systemd-flag-272786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-272786 not found
	
	** /stderr **
	I1212 01:35:23.903288  703668 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:35:23.930433  703668 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-987f53aa9676 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:59:9a:7d:dd:1e} reservation:<nil>}
	I1212 01:35:23.930752  703668 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-3f096d49a95b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:06:56:75:08:cc} reservation:<nil>}
	I1212 01:35:23.931024  703668 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0506280b338c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:9b:ca:19:ce:5d} reservation:<nil>}
	I1212 01:35:23.931310  703668 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-71b1cfc47b2b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:24:cd:de:f9:73} reservation:<nil>}
	I1212 01:35:23.931735  703668 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5c40}
	I1212 01:35:23.931760  703668 network_create.go:124] attempt to create docker network force-systemd-flag-272786 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 01:35:23.931823  703668 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-272786 force-systemd-flag-272786
	I1212 01:35:24.027366  703668 network_create.go:108] docker network force-systemd-flag-272786 192.168.85.0/24 created
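
The network.go lines above walk candidate private /24s (192.168.49.0, .58, .67, .76, stepping by 9) and take the first one no existing docker bridge owns. A toy sketch of that selection, assuming the same step-by-9 candidate list seen in this log; minikube's real picker also tracks reservations and other prefix sizes:

// subnetpick.go - hedged sketch of the subnet-selection behaviour above.
package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 255; third += 9 { // 49, 58, 67, 76, 85, ...
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// Subnets already claimed by the bridges skipped in the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24
}
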
	I1212 01:35:24.027406  703668 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-272786" container
	I1212 01:35:24.027480  703668 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 01:35:24.062073  703668 cli_runner.go:164] Run: docker volume create force-systemd-flag-272786 --label name.minikube.sigs.k8s.io=force-systemd-flag-272786 --label created_by.minikube.sigs.k8s.io=true
	I1212 01:35:24.094272  703668 oci.go:103] Successfully created a docker volume force-systemd-flag-272786
	I1212 01:35:24.094374  703668 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-272786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-272786 --entrypoint /usr/bin/test -v force-systemd-flag-272786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 01:35:24.763043  703668 oci.go:107] Successfully prepared a docker volume force-systemd-flag-272786
	I1212 01:35:24.763105  703668 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 01:35:24.763114  703668 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 01:35:24.763188  703668 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-272786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	W1212 01:35:29.957261  701970 pod_ready.go:104] pod "coredns-66bc5c9577-5jwqj" is not "Ready", error: <nil>
	I1212 01:35:30.898086  701970 pod_ready.go:94] pod "coredns-66bc5c9577-5jwqj" is "Ready"
	I1212 01:35:30.898118  701970 pod_ready.go:86] duration metric: took 3.005701499s for pod "coredns-66bc5c9577-5jwqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:30.905086  701970 pod_ready.go:83] waiting for pod "etcd-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:30.909375  701970 pod_ready.go:94] pod "etcd-pause-249141" is "Ready"
	I1212 01:35:30.909405  701970 pod_ready.go:86] duration metric: took 4.289923ms for pod "etcd-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:30.911459  701970 pod_ready.go:83] waiting for pod "kube-apiserver-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 01:35:32.916710  701970 pod_ready.go:104] pod "kube-apiserver-pause-249141" is not "Ready", error: <nil>
	I1212 01:35:28.988164  703668 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-272786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.224940156s)
	I1212 01:35:28.988205  703668 kic.go:203] duration metric: took 4.225087147s to extract preloaded images to volume ...
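
The preload step above runs a throwaway container whose entrypoint is tar: the lz4 tarball is mounted read-only and the node's named volume at /extractDir, so the container runtime's image store is populated before the node container ever starts. A hedged Go sketch of that same docker-run invocation via os/exec, reusing the exact paths and kicbase image from this log:

// preloadextract.go - sketch of the docker-run/tar extraction pattern above.
package main

import (
	"fmt"
	"os/exec"
)

func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4",
		"force-systemd-flag-272786",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	)
	fmt.Println(err)
}
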
	W1212 01:35:28.988359  703668 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 01:35:28.988468  703668 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 01:35:29.045491  703668 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-272786 --name force-systemd-flag-272786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-272786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-272786 --network force-systemd-flag-272786 --ip 192.168.85.2 --volume force-systemd-flag-272786:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 01:35:29.355225  703668 cli_runner.go:164] Run: docker container inspect force-systemd-flag-272786 --format={{.State.Running}}
	I1212 01:35:29.376194  703668 cli_runner.go:164] Run: docker container inspect force-systemd-flag-272786 --format={{.State.Status}}
	I1212 01:35:29.405354  703668 cli_runner.go:164] Run: docker exec force-systemd-flag-272786 stat /var/lib/dpkg/alternatives/iptables
	I1212 01:35:29.467675  703668 oci.go:144] the created container "force-systemd-flag-272786" has a running status.
	I1212 01:35:29.467713  703668 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa...
	I1212 01:35:29.734937  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 01:35:29.734989  703668 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 01:35:29.770721  703668 cli_runner.go:164] Run: docker container inspect force-systemd-flag-272786 --format={{.State.Status}}
	I1212 01:35:29.798833  703668 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 01:35:29.798854  703668 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-272786 chown docker:docker /home/docker/.ssh/authorized_keys]
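
kic.go above generates a fresh RSA key pair per machine and installs the public half as /home/docker/.ssh/authorized_keys inside the container (the 381-byte copy in the log). A self-contained sketch of that key bootstrap, assuming golang.org/x/crypto/ssh for the authorized_keys encoding:

// sshkey.go - sketch of the per-machine ssh key bootstrap above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private half stays on the host, as id_rsa under the machine directory.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	_ = os.WriteFile("id_rsa", privPEM, 0600)

	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	// This is the line that lands in the container's authorized_keys.
	fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
}
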
	I1212 01:35:29.908743  703668 cli_runner.go:164] Run: docker container inspect force-systemd-flag-272786 --format={{.State.Status}}
	I1212 01:35:29.946651  703668 machine.go:94] provisionDockerMachine start ...
	I1212 01:35:29.946749  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:29.981782  703668 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:29.982133  703668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1212 01:35:29.982144  703668 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:35:29.982754  703668 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59984->127.0.0.1:33434: read: connection reset by peer
	I1212 01:35:33.134445  703668 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-272786
	
	I1212 01:35:33.134470  703668 ubuntu.go:182] provisioning hostname "force-systemd-flag-272786"
	I1212 01:35:33.134534  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:33.152340  703668 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:33.152652  703668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1212 01:35:33.152669  703668 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-272786 && echo "force-systemd-flag-272786" | sudo tee /etc/hostname
	I1212 01:35:33.320347  703668 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-272786
	
	I1212 01:35:33.320429  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:33.338183  703668 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:33.338489  703668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1212 01:35:33.338510  703668 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-272786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-272786/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-272786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:35:33.486926  703668 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:35:33.486963  703668 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 01:35:33.487009  703668 ubuntu.go:190] setting up certificates
	I1212 01:35:33.487017  703668 provision.go:84] configureAuth start
	I1212 01:35:33.487086  703668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-272786
	I1212 01:35:33.504886  703668 provision.go:143] copyHostCerts
	I1212 01:35:33.504935  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 01:35:33.504970  703668 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 01:35:33.504977  703668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 01:35:33.505057  703668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 01:35:33.505141  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 01:35:33.505157  703668 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 01:35:33.505162  703668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 01:35:33.505187  703668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 01:35:33.505235  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 01:35:33.505251  703668 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 01:35:33.505259  703668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 01:35:33.505282  703668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 01:35:33.505338  703668 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-272786 san=[127.0.0.1 192.168.85.2 force-systemd-flag-272786 localhost minikube]
	I1212 01:35:33.680054  703668 provision.go:177] copyRemoteCerts
	I1212 01:35:33.680140  703668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:35:33.680182  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:33.699306  703668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa Username:docker}
	I1212 01:35:33.806714  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 01:35:33.806773  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 01:35:33.825001  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 01:35:33.825095  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:35:33.843609  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 01:35:33.843696  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:35:33.863654  703668 provision.go:87] duration metric: took 376.613968ms to configureAuth
	I1212 01:35:33.863682  703668 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:35:33.863914  703668 config.go:182] Loaded profile config "force-systemd-flag-272786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:35:33.864022  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:33.881722  703668 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:33.882037  703668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1212 01:35:33.882051  703668 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:35:34.179565  703668 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:35:34.179589  703668 machine.go:97] duration metric: took 4.232914981s to provisionDockerMachine
	I1212 01:35:34.179601  703668 client.go:176] duration metric: took 10.343010597s to LocalClient.Create
	I1212 01:35:34.179615  703668 start.go:167] duration metric: took 10.343073169s to libmachine.API.Create "force-systemd-flag-272786"
	I1212 01:35:34.179627  703668 start.go:293] postStartSetup for "force-systemd-flag-272786" (driver="docker")
	I1212 01:35:34.179637  703668 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:35:34.179705  703668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:35:34.179755  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:34.198354  703668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa Username:docker}
	I1212 01:35:34.302942  703668 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:35:34.306278  703668 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:35:34.306303  703668 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:35:34.306316  703668 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 01:35:34.306370  703668 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 01:35:34.306449  703668 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 01:35:34.306456  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /etc/ssl/certs/4909542.pem
	I1212 01:35:34.306557  703668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:35:34.314264  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 01:35:34.332613  703668 start.go:296] duration metric: took 152.972381ms for postStartSetup
	I1212 01:35:34.333021  703668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-272786
	I1212 01:35:34.349693  703668 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/config.json ...
	I1212 01:35:34.349986  703668 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:35:34.350040  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:34.367298  703668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa Username:docker}
	I1212 01:35:34.472017  703668 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:35:34.476574  703668 start.go:128] duration metric: took 10.643741758s to createHost
	I1212 01:35:34.476601  703668 start.go:83] releasing machines lock for "force-systemd-flag-272786", held for 10.643878944s
	I1212 01:35:34.476672  703668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-272786
	I1212 01:35:34.494056  703668 ssh_runner.go:195] Run: cat /version.json
	I1212 01:35:34.494091  703668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:35:34.494105  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:34.494144  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:34.519554  703668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa Username:docker}
	I1212 01:35:34.520741  703668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa Username:docker}
	I1212 01:35:34.622124  703668 ssh_runner.go:195] Run: systemctl --version
	I1212 01:35:34.714813  703668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:35:34.755898  703668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:35:34.760538  703668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:35:34.760609  703668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:35:34.789434  703668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 01:35:34.789462  703668 start.go:496] detecting cgroup driver to use...
	I1212 01:35:34.789477  703668 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1212 01:35:34.789896  703668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:35:34.813518  703668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:35:34.826763  703668 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:35:34.826823  703668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:35:34.843750  703668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:35:34.861767  703668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:35:34.989827  703668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:35:35.127609  703668 docker.go:234] disabling docker service ...
	I1212 01:35:35.127709  703668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:35:35.149841  703668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:35:35.164328  703668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:35:35.299054  703668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:35:35.442750  703668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:35:35.456841  703668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:35:35.472718  703668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 01:35:35.472822  703668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.482122  703668 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 01:35:35.482241  703668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.492031  703668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.501276  703668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.510287  703668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:35:35.518627  703668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.527801  703668 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.541620  703668 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.550187  703668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:35:35.558397  703668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:35:35.565825  703668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:35.683407  703668 ssh_runner.go:195] Run: sudo systemctl restart crio
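
The crio.go steps above are all in-place rewrites of /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) followed by daemon-reload and a crio restart. A rough Go equivalent of the two simple substitutions; setTOMLKey is a hypothetical helper, not a minikube function, and editing the real file needs root:

// crioconf.go - sketch mirroring the sed-style config edits above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey rewrites any existing `key = ...` line to `key = "value"`,
// matching `sed -i 's|^.*key = .*$|key = "value"|'`.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	fmt.Println(setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1"))
	fmt.Println(setTOMLKey(conf, "cgroup_manager", "systemd"))
}
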
	I1212 01:35:35.855526  703668 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:35:35.855666  703668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:35:35.860180  703668 start.go:564] Will wait 60s for crictl version
	I1212 01:35:35.860298  703668 ssh_runner.go:195] Run: which crictl
	I1212 01:35:35.864171  703668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:35:35.888718  703668 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 01:35:35.888878  703668 ssh_runner.go:195] Run: crio --version
	I1212 01:35:35.921252  703668 ssh_runner.go:195] Run: crio --version
	I1212 01:35:35.956571  703668 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1212 01:35:34.919908  701970 pod_ready.go:104] pod "kube-apiserver-pause-249141" is not "Ready", error: <nil>
	I1212 01:35:36.922751  701970 pod_ready.go:94] pod "kube-apiserver-pause-249141" is "Ready"
	I1212 01:35:36.922783  701970 pod_ready.go:86] duration metric: took 6.011298613s for pod "kube-apiserver-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:36.927150  701970 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:36.934059  701970 pod_ready.go:94] pod "kube-controller-manager-pause-249141" is "Ready"
	I1212 01:35:36.934083  701970 pod_ready.go:86] duration metric: took 6.899682ms for pod "kube-controller-manager-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:36.937607  701970 pod_ready.go:83] waiting for pod "kube-proxy-nlvxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:36.944732  701970 pod_ready.go:94] pod "kube-proxy-nlvxp" is "Ready"
	I1212 01:35:36.944806  701970 pod_ready.go:86] duration metric: took 7.177173ms for pod "kube-proxy-nlvxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:36.948381  701970 pod_ready.go:83] waiting for pod "kube-scheduler-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:37.116355  701970 pod_ready.go:94] pod "kube-scheduler-pause-249141" is "Ready"
	I1212 01:35:37.116407  701970 pod_ready.go:86] duration metric: took 167.949341ms for pod "kube-scheduler-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:37.116427  701970 pod_ready.go:40] duration metric: took 9.228286921s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 01:35:37.213922  701970 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 01:35:37.219452  701970 out.go:179] * Done! kubectl is now configured to use "pause-249141" cluster and "default" namespace by default
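
The pod_ready.go lines above poll each control-plane pod until its Ready condition is True, with a 4m0s cap on the extra wait. A hedged client-go sketch of the same check for the coredns pod; the kubeconfig path and the 2s poll interval here are assumptions, not minikube's values:

// podready.go - sketch of waiting for a kube-system pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Default ~/.kube/config; the log builds its client from the
	// pause-249141 profile certs instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute) // the 4m0s cap seen above
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-66bc5c9577-5jwqj", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out")
}
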
	I1212 01:35:35.959386  703668 cli_runner.go:164] Run: docker network inspect force-systemd-flag-272786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:35:35.974728  703668 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 01:35:35.978732  703668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
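
Both /etc/hosts updates in this log (host.minikube.internal here, control-plane.minikube.internal later) use the same idempotent pattern: filter out any prior line for the name, append the fresh mapping, write a temp file, copy it into place. A local-file Go sketch of that upsert; os.Rename stands in for the log's sudo cp:

// hostsupdate.go - sketch of the grep -v / append / copy pattern above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as grep -v $'\t<name>$': drop the old mapping, if any.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	fmt.Println(upsertHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"))
}
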
	I1212 01:35:35.988398  703668 kubeadm.go:884] updating cluster {Name:force-systemd-flag-272786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-272786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:35:35.988512  703668 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 01:35:35.988566  703668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:36.025116  703668 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:35:36.025138  703668 crio.go:433] Images already preloaded, skipping extraction
	I1212 01:35:36.025204  703668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:36.050850  703668 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:35:36.050870  703668 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:35:36.050877  703668 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1212 01:35:36.050971  703668 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-272786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-272786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:35:36.051056  703668 ssh_runner.go:195] Run: crio config
	I1212 01:35:36.132675  703668 cni.go:84] Creating CNI manager for ""
	I1212 01:35:36.132699  703668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 01:35:36.132714  703668 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:35:36.132757  703668 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-272786 NodeName:force-systemd-flag-272786 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:35:36.132927  703668 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-272786"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:35:36.133007  703668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 01:35:36.141149  703668 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:35:36.141219  703668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:35:36.149641  703668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1212 01:35:36.162771  703668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:35:36.181393  703668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
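
The rendered kubeadm config above travels as one multi-document YAML file, staged as the 2221-byte /var/tmp/minikube/kubeadm.yaml.new, with InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---". A small sketch that splits such a file and reports each document's kind:

// kubeadmdocs.go - sketch splitting the staged multi-document config above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				// Expected: InitConfiguration, ClusterConfiguration,
				// KubeletConfiguration, KubeProxyConfiguration.
				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
			}
		}
	}
}
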
	I1212 01:35:36.195784  703668 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:35:36.204060  703668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:35:36.214259  703668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:36.332323  703668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:36.349205  703668 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786 for IP: 192.168.85.2
	I1212 01:35:36.349228  703668 certs.go:195] generating shared ca certs ...
	I1212 01:35:36.349244  703668 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.349380  703668 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 01:35:36.349430  703668 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 01:35:36.349442  703668 certs.go:257] generating profile certs ...
	I1212 01:35:36.349498  703668 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/client.key
	I1212 01:35:36.349525  703668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/client.crt with IP's: []
	I1212 01:35:36.502060  703668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/client.crt ...
	I1212 01:35:36.502095  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/client.crt: {Name:mk0a971d98a200fe256e3ba9ef91e5c3f2cf41f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.502298  703668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/client.key ...
	I1212 01:35:36.502315  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/client.key: {Name:mk88b66a5b32881eee1f97d047f50f89de357f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.502415  703668 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key.8a6cec44
	I1212 01:35:36.502433  703668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt.8a6cec44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1212 01:35:36.672507  703668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt.8a6cec44 ...
	I1212 01:35:36.672535  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt.8a6cec44: {Name:mk6e672e56b809191e2dadd9ac8d967bc71e3eaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.672715  703668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key.8a6cec44 ...
	I1212 01:35:36.672728  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key.8a6cec44: {Name:mkebe8e696a445ea7af787f561fbd60b7f81dbe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.672818  703668 certs.go:382] copying /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt.8a6cec44 -> /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt
	I1212 01:35:36.672901  703668 certs.go:386] copying /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key.8a6cec44 -> /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key
	I1212 01:35:36.672966  703668 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.key
	I1212 01:35:36.672984  703668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.crt with IP's: []
	I1212 01:35:36.913439  703668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.crt ...
	I1212 01:35:36.913473  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.crt: {Name:mk9e1c5a1656642b05fdb99fa8ff9a4562274554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.913690  703668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.key ...
	I1212 01:35:36.913716  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.key: {Name:mk35c1fddb0c3101aab50410fe79a850daca3fd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
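For context, the certs.go/crypto.go steps above amount to minting leaf certificates against the shared minikube CA, with IP SANs only on the apiserver cert. Below is a minimal, self-contained Go sketch of that flow; it is illustrative only, and the self-signed stand-in CA, names, and lifetimes are assumptions, not minikube's actual implementation:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Stand-in for the shared CA (.minikube/ca.crt / ca.key in the log above).
		caKey := must(rsa.GenerateKey(rand.Reader, 2048))
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		ca := must(x509.ParseCertificate(
			must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

		// Leaf cert carrying the same IP SANs the log shows for apiserver.crt.8a6cec44.
		leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der := must(x509.CreateCertificate(rand.Reader, leafTmpl, ca, &leafKey.PublicKey, caKey))
		fmt.Printf("signed cert: %d DER bytes\n", len(der))
	}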
	I1212 01:35:36.913816  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 01:35:36.913840  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 01:35:36.913853  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 01:35:36.913879  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 01:35:36.913895  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 01:35:36.913939  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 01:35:36.913957  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 01:35:36.913989  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 01:35:36.914044  703668 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 01:35:36.914085  703668 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 01:35:36.914094  703668 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:35:36.914126  703668 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:35:36.914159  703668 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:35:36.914190  703668 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 01:35:36.914236  703668 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 01:35:36.914275  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:36.914298  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem -> /usr/share/ca-certificates/490954.pem
	I1212 01:35:36.914315  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /usr/share/ca-certificates/4909542.pem
	I1212 01:35:36.914896  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:35:36.936710  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:35:36.957620  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:35:36.976804  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:35:36.997173  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 01:35:37.019228  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:35:37.040489  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:35:37.058510  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:35:37.080237  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:35:37.098455  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 01:35:37.117307  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 01:35:37.136488  703668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:35:37.149481  703668 ssh_runner.go:195] Run: openssl version
	I1212 01:35:37.156509  703668 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 01:35:37.164673  703668 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 01:35:37.183640  703668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 01:35:37.188610  703668 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 01:35:37.188737  703668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 01:35:37.241458  703668 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:35:37.276611  703668 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/490954.pem /etc/ssl/certs/51391683.0
	I1212 01:35:37.293724  703668 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 01:35:37.305636  703668 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 01:35:37.331874  703668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 01:35:37.340244  703668 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 01:35:37.340319  703668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 01:35:37.416295  703668 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:35:37.427292  703668 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4909542.pem /etc/ssl/certs/3ec20f2e.0
	I1212 01:35:37.439024  703668 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:37.455447  703668 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:35:37.464724  703668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:37.471953  703668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:37.472022  703668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:37.519179  703668 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:35:37.530460  703668 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
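The openssl/ln pairs above follow OpenSSL's c_rehash convention: a CA in /etc/ssl/certs is resolved via a symlink named <subject-hash>.0, which is why each `openssl x509 -hash -noout` is followed by an `ln -fs` (here b5213941.0 for minikubeCA.pem). A hedged Go sketch of that convention, assuming openssl is on PATH and the paths shown are writable:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the 8-hex-digit subject hash on a single line.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // emulate ln -fs: drop any stale link first
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}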
	I1212 01:35:37.538128  703668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:35:37.543493  703668 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:35:37.543545  703668 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-272786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-272786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:37.543618  703668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:35:37.543678  703668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:35:37.574014  703668 cri.go:89] found id: ""
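An empty `found id: ""` means the crictl probe printed no container IDs, i.e. no kube-system containers predate this start. A small sketch of that probe, assuming crictl and sudo are available as in the log; `crictl ps -a --quiet` emits one container ID per line:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		// strings.Fields tolerates the empty-output case and yields no IDs.
		ids := strings.Fields(string(out))
		fmt.Printf("found %d containers: %v\n", len(ids), ids)
	}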
	I1212 01:35:37.574079  703668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:35:37.583959  703668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:35:37.595745  703668 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:35:37.595806  703668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:35:37.606671  703668 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:35:37.606720  703668 kubeadm.go:158] found existing configuration files:
	
	I1212 01:35:37.606768  703668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:35:37.620678  703668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:35:37.620738  703668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:35:37.630328  703668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:35:37.641154  703668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:35:37.641267  703668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:35:37.651303  703668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:35:37.661701  703668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:35:37.661806  703668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:35:37.671854  703668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:35:37.684040  703668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:35:37.684136  703668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
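The four grep/rm pairs above implement a simple invariant: a kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and anything else is removed so kubeadm can regenerate it. A hypothetical local-file sketch of the same logic (the real checks run remotely over SSH via ssh_runner):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const marker = "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), marker) {
				os.Remove(f) // mirrors `sudo rm -f`: ignore errors, force a fresh file
			}
		}
	}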
	I1212 01:35:37.700397  703668 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:35:37.761723  703668 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 01:35:37.762126  703668 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:35:37.807149  703668 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:35:37.807241  703668 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:35:37.807290  703668 kubeadm.go:319] OS: Linux
	I1212 01:35:37.807355  703668 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:35:37.807417  703668 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:35:37.807468  703668 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:35:37.807532  703668 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:35:37.807614  703668 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:35:37.807681  703668 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:35:37.807740  703668 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:35:37.807800  703668 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:35:37.807862  703668 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:35:37.894776  703668 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:35:37.894892  703668 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:35:37.894997  703668 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:35:37.911117  703668 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:35:37.916385  703668 out.go:252]   - Generating certificates and keys ...
	I1212 01:35:37.916505  703668 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:35:37.916582  703668 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:35:38.377878  703668 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	
	
	==> CRI-O <==
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.910158768Z" level=info msg="Started container" PID=2368 containerID=06ed94bd38e372eff5b039f17f5b555d55348a17a2f305a892bbefaccb2dff7d description=kube-system/etcd-pause-249141/etcd id=54e8ec35-57fe-4cf9-99c5-579c4c6cb3e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dbb2472c051010ed79b38e136567db0a0337b52feb4d26e960eef74b961ae160
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.978911482Z" level=info msg="Created container 5b492324a19d7368866c313237fbae39b56ac6a28750bbb96d9a4aab8764b945: kube-system/kindnet-5s7pz/kindnet-cni" id=a6406c11-84c7-4fd8-8410-62deb71c95cd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.97918007Z" level=info msg="Created container 550a496e315f16d7f7d774db26779f759f5db45901e0f542db49dd51ea30338e: kube-system/kube-scheduler-pause-249141/kube-scheduler" id=1c6efe33-aaaf-4dfd-b000-c72d3caada6a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.980203873Z" level=info msg="Starting container: 5b492324a19d7368866c313237fbae39b56ac6a28750bbb96d9a4aab8764b945" id=3a7d4cde-8af2-4582-bb46-21b88008bb86 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.983496545Z" level=info msg="Created container 86eb237121dbb40df2f9508434a7b2d0bb0bbb9a812e6b01a538de3d0c1b435a: kube-system/kube-apiserver-pause-249141/kube-apiserver" id=c56f590c-4860-49f7-bd04-098d09919844 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.983719153Z" level=info msg="Starting container: 550a496e315f16d7f7d774db26779f759f5db45901e0f542db49dd51ea30338e" id=ae2e4cd6-091b-4cae-9c97-b293452b098d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.987603876Z" level=info msg="Started container" PID=2398 containerID=5b492324a19d7368866c313237fbae39b56ac6a28750bbb96d9a4aab8764b945 description=kube-system/kindnet-5s7pz/kindnet-cni id=3a7d4cde-8af2-4582-bb46-21b88008bb86 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fdd6456bfcddc8e48cbe3d3163e3e454c7db94d343a0a66481ae6eaebc2b963d
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.989417264Z" level=info msg="Started container" PID=2385 containerID=550a496e315f16d7f7d774db26779f759f5db45901e0f542db49dd51ea30338e description=kube-system/kube-scheduler-pause-249141/kube-scheduler id=ae2e4cd6-091b-4cae-9c97-b293452b098d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8fd836479bfbd7367ad9bd4ffb6c149e50f9f4f5c45f641d678e88874f3f307f
	Dec 12 01:35:19 pause-249141 crio[2078]: time="2025-12-12T01:35:19.002973975Z" level=info msg="Starting container: 86eb237121dbb40df2f9508434a7b2d0bb0bbb9a812e6b01a538de3d0c1b435a" id=17018a8f-eabd-4e5a-81e2-5e1186ea90d4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 01:35:19 pause-249141 crio[2078]: time="2025-12-12T01:35:19.034213156Z" level=info msg="Created container 4e079fbf56172a4e7de82ff26b90282dce34661a3ad4a9b1340b61c9cbb4d555: kube-system/kube-proxy-nlvxp/kube-proxy" id=cea791f6-3d99-47a0-8548-16edc8b47c99 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 01:35:19 pause-249141 crio[2078]: time="2025-12-12T01:35:19.038415918Z" level=info msg="Starting container: 4e079fbf56172a4e7de82ff26b90282dce34661a3ad4a9b1340b61c9cbb4d555" id=1b4a59bf-d0f9-4960-83d1-dc45b2dd4e46 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 01:35:19 pause-249141 crio[2078]: time="2025-12-12T01:35:19.06290652Z" level=info msg="Started container" PID=2390 containerID=86eb237121dbb40df2f9508434a7b2d0bb0bbb9a812e6b01a538de3d0c1b435a description=kube-system/kube-apiserver-pause-249141/kube-apiserver id=17018a8f-eabd-4e5a-81e2-5e1186ea90d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=86917fb33990b23b6b78e6cc6505a1a5c8f7563a74c4dd0715de2839f1bdb022
	Dec 12 01:35:19 pause-249141 crio[2078]: time="2025-12-12T01:35:19.075748975Z" level=info msg="Started container" PID=2361 containerID=4e079fbf56172a4e7de82ff26b90282dce34661a3ad4a9b1340b61c9cbb4d555 description=kube-system/kube-proxy-nlvxp/kube-proxy id=1b4a59bf-d0f9-4960-83d1-dc45b2dd4e46 name=/runtime.v1.RuntimeService/StartContainer sandboxID=192d1366817c4fb7dee05366d5c73a517c0cb58884369176dfe93d84002b6c5b
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.428827219Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.445502592Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.453416108Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.453544892Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.462249946Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.462439382Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.462533443Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.475967097Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.476162013Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.476244506Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.493818088Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.493853279Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	5b492324a19d7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   fdd6456bfcddc       kindnet-5s7pz                          kube-system
	86eb237121dbb       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   21 seconds ago       Running             kube-apiserver            1                   86917fb33990b       kube-apiserver-pause-249141            kube-system
	550a496e315f1       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   21 seconds ago       Running             kube-scheduler            1                   8fd836479bfbd       kube-scheduler-pause-249141            kube-system
	06ed94bd38e37       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   21 seconds ago       Running             etcd                      1                   dbb2472c05101       etcd-pause-249141                      kube-system
	baa32678c2651       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   21 seconds ago       Running             kube-controller-manager   1                   09ad9d32edab6       kube-controller-manager-pause-249141   kube-system
	e1c1abd5f32e6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   3c59d8033f714       coredns-66bc5c9577-5jwqj               kube-system
	4e079fbf56172       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   21 seconds ago       Running             kube-proxy                1                   192d1366817c4       kube-proxy-nlvxp                       kube-system
	c771bc06bce8f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   3c59d8033f714       coredns-66bc5c9577-5jwqj               kube-system
	45aec66ba1df1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   fdd6456bfcddc       kindnet-5s7pz                          kube-system
	e704c7281fcb7       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   192d1366817c4       kube-proxy-nlvxp                       kube-system
	595d301c32a6e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   09ad9d32edab6       kube-controller-manager-pause-249141   kube-system
	adb8f76bff1a1       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   dbb2472c05101       etcd-pause-249141                      kube-system
	93423cf437a5d       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   86917fb33990b       kube-apiserver-pause-249141            kube-system
	3ec4d221661b7       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   8fd836479bfbd       kube-scheduler-pause-249141            kube-system
	
	
	==> coredns [c771bc06bce8f82dca79bdb4ca387282bc81275d57a564d32a9a0697a87b2d17] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50648 - 5096 "HINFO IN 2028248822654866337.4104538514851914123. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021771971s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e1c1abd5f32e6476b2cf029011d2249f974696e5db5ed40ae5e5686e0bd54717] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45260 - 42576 "HINFO IN 6515863558079566634.7371817655653771039. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026118041s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               pause-249141
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-249141
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=pause-249141
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T01_34_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 01:34:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-249141
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 01:35:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 01:35:05 +0000   Fri, 12 Dec 2025 01:34:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 01:35:05 +0000   Fri, 12 Dec 2025 01:34:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 01:35:05 +0000   Fri, 12 Dec 2025 01:34:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 01:35:05 +0000   Fri, 12 Dec 2025 01:35:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-249141
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                a5f80442-b367-47c0-9534-e87912c2f944
	  Boot ID:                    cbbb78f6-c2df-4b23-9269-8d5d442bffaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5jwqj                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-249141                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-5s7pz                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-249141             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-249141    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-nlvxp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-249141             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 12s                kube-proxy       
	  Warning  CgroupV1                 88s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node pause-249141 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node pause-249141 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s (x8 over 88s)  kubelet          Node pause-249141 status is now: NodeHasSufficientPID
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s                kubelet          Node pause-249141 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s                kubelet          Node pause-249141 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s                kubelet          Node pause-249141 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                node-controller  Node pause-249141 event: Registered Node pause-249141 in Controller
	  Normal   NodeReady                35s                kubelet          Node pause-249141 status is now: NodeReady
	  Normal   RegisteredNode           11s                node-controller  Node pause-249141 event: Registered Node pause-249141 in Controller
	
	
	==> dmesg <==
	[  +3.673413] overlayfs: idmapped layers are currently not supported
	[ +34.404177] overlayfs: idmapped layers are currently not supported
	[Dec12 00:59] overlayfs: idmapped layers are currently not supported
	[Dec12 01:00] overlayfs: idmapped layers are currently not supported
	[  +2.854463] overlayfs: idmapped layers are currently not supported
	[Dec12 01:01] overlayfs: idmapped layers are currently not supported
	[Dec12 01:02] overlayfs: idmapped layers are currently not supported
	[Dec12 01:03] overlayfs: idmapped layers are currently not supported
	[Dec12 01:08] overlayfs: idmapped layers are currently not supported
	[ +34.061772] overlayfs: idmapped layers are currently not supported
	[Dec12 01:09] overlayfs: idmapped layers are currently not supported
	[Dec12 01:11] overlayfs: idmapped layers are currently not supported
	[Dec12 01:12] overlayfs: idmapped layers are currently not supported
	[Dec12 01:13] overlayfs: idmapped layers are currently not supported
	[Dec12 01:14] overlayfs: idmapped layers are currently not supported
	[  +1.592007] overlayfs: idmapped layers are currently not supported
	[Dec12 01:15] overlayfs: idmapped layers are currently not supported
	[ +24.197582] overlayfs: idmapped layers are currently not supported
	[Dec12 01:16] overlayfs: idmapped layers are currently not supported
	[ +26.194679] overlayfs: idmapped layers are currently not supported
	[Dec12 01:17] overlayfs: idmapped layers are currently not supported
	[Dec12 01:18] overlayfs: idmapped layers are currently not supported
	[Dec12 01:21] overlayfs: idmapped layers are currently not supported
	[Dec12 01:22] overlayfs: idmapped layers are currently not supported
	[Dec12 01:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [06ed94bd38e372eff5b039f17f5b555d55348a17a2f305a892bbefaccb2dff7d] <==
	{"level":"warn","ts":"2025-12-12T01:35:24.048494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.077688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.114503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.154271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.166616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.192192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.234142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.267268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.299642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.333060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.351849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.394028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.426549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.472428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.509367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.524180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.544471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.579876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.609982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.635540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.730916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.747405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.771473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.792836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.910799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37648","server-name":"","error":"EOF"}
	
	
	==> etcd [adb8f76bff1a17e46695b9272558293767fdfe57f417ac8f86c718a1507b74ce] <==
	{"level":"warn","ts":"2025-12-12T01:34:15.413295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:34:15.446601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:34:15.514133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:34:15.547551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:34:15.568272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:34:15.606208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:34:15.688688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33132","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T01:35:09.936817Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-12T01:35:09.936885Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-249141","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-12T01:35:09.936993Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T01:35:10.095499Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T01:35:10.095604Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:35:10.095625Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-12T01:35:10.095685Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-12T01:35:10.095742Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T01:35:10.095780Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T01:35:10.095791Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:35:10.095810Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-12T01:35:10.095889Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T01:35:10.095939Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T01:35:10.095974Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:35:10.099397Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-12T01:35:10.099564Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:35:10.099615Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-12T01:35:10.099643Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-249141","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 01:35:41 up  4:18,  0 users,  load average: 2.37, 1.47, 1.59
	Linux pause-249141 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [45aec66ba1df15580fb0dc430c9ca8833004e5cfc642d8cb7fbac231cbdf9574] <==
	I1212 01:34:25.312967       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 01:34:25.313240       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 01:34:25.313417       1 main.go:148] setting mtu 1500 for CNI 
	I1212 01:34:25.313439       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 01:34:25.313453       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T01:34:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 01:34:25.526220       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 01:34:25.526246       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 01:34:25.526254       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 01:34:25.527071       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1212 01:34:55.527250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1212 01:34:55.527264       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1212 01:34:55.527364       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1212 01:34:55.527434       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1212 01:34:57.026912       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 01:34:57.026942       1 metrics.go:72] Registering metrics
	I1212 01:34:57.027061       1 controller.go:711] "Syncing nftables rules"
	I1212 01:35:05.531673       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 01:35:05.531733       1 main.go:301] handling current node
	
	
	==> kindnet [5b492324a19d7368866c313237fbae39b56ac6a28750bbb96d9a4aab8764b945] <==
	I1212 01:35:19.239236       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 01:35:19.243199       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 01:35:19.244159       1 main.go:148] setting mtu 1500 for CNI 
	I1212 01:35:19.249108       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 01:35:19.249175       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T01:35:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 01:35:19.430511       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 01:35:19.430552       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 01:35:19.430566       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 01:35:19.431462       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 01:35:26.530724       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 01:35:26.530760       1 metrics.go:72] Registering metrics
	I1212 01:35:26.530891       1 controller.go:711] "Syncing nftables rules"
	I1212 01:35:29.425388       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 01:35:29.425429       1 main.go:301] handling current node
	I1212 01:35:39.422745       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 01:35:39.422808       1 main.go:301] handling current node
	
	
	==> kube-apiserver [86eb237121dbb40df2f9508434a7b2d0bb0bbb9a812e6b01a538de3d0c1b435a] <==
	I1212 01:35:26.401982       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 01:35:26.402651       1 cache.go:39] Caches are synced for autoregister controller
	I1212 01:35:26.408658       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 01:35:26.439127       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1212 01:35:26.439553       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 01:35:26.439688       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 01:35:26.439748       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 01:35:26.447430       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 01:35:26.447948       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 01:35:26.458074       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 01:35:26.459139       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 01:35:26.459237       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 01:35:26.459352       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 01:35:26.459396       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1212 01:35:26.459419       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 01:35:26.461057       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 01:35:26.461114       1 policy_source.go:240] refreshing policies
	E1212 01:35:26.461353       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 01:35:26.497047       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 01:35:26.776464       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 01:35:28.313286       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 01:35:29.929430       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 01:35:29.973861       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 01:35:30.008940       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 01:35:30.106296       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [93423cf437a5df66043e9d8c67f967f73a3aba50c58c120f3210fc19fee0e72a] <==
	W1212 01:35:09.962571       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.962612       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.962652       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964158       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964233       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964282       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964332       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964378       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964431       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964479       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964542       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964596       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964645       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964693       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964746       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964820       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964859       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964895       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964933       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964967       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.965003       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.965047       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.965128       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.965199       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [595d301c32a6e5bcd377b0b53fca8b556fd8a0a9887c54cb4dbc74fae9bc5012] <==
	I1212 01:34:23.622252       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 01:34:23.622662       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-249141"
	I1212 01:34:23.622794       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 01:34:23.623144       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1212 01:34:23.623279       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 01:34:23.624384       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 01:34:23.624478       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 01:34:23.624982       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1212 01:34:23.624984       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 01:34:23.626875       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 01:34:23.627451       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 01:34:23.627521       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 01:34:23.627533       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 01:34:23.631188       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 01:34:23.633516       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 01:34:23.633606       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 01:34:23.633632       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 01:34:23.633642       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 01:34:23.633648       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 01:34:23.642892       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 01:34:23.642920       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 01:34:23.642928       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 01:34:23.644355       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 01:34:23.644925       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-249141" podCIDRs=["10.244.0.0/24"]
	I1212 01:35:08.630379       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [baa32678c2651eca75a4f6b1c2685393cdffab03f5c5961c36b3fc1b1a93db68] <==
	I1212 01:35:29.898603       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 01:35:29.899426       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1212 01:35:29.893883       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1212 01:35:29.894351       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 01:35:29.894370       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 01:35:29.894636       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 01:35:29.900266       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 01:35:29.902793       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 01:35:29.903724       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1212 01:35:29.915661       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 01:35:29.923534       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 01:35:29.936418       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 01:35:29.944676       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 01:35:29.944792       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 01:35:29.944825       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 01:35:29.944780       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 01:35:29.947262       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 01:35:29.952497       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 01:35:29.952662       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 01:35:29.952765       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-249141"
	I1212 01:35:29.952855       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 01:35:29.953310       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 01:35:29.954878       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 01:35:29.967312       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 01:35:29.974445       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [4e079fbf56172a4e7de82ff26b90282dce34661a3ad4a9b1340b61c9cbb4d555] <==
	I1212 01:35:20.343706       1 server_linux.go:53] "Using iptables proxy"
	I1212 01:35:21.809076       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1212 01:35:26.424162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-249141\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1212 01:35:27.911018       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 01:35:27.911052       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 01:35:27.911124       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:35:27.932136       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 01:35:27.932200       1 server_linux.go:132] "Using iptables Proxier"
	I1212 01:35:27.936614       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:35:27.936986       1 server.go:527] "Version info" version="v1.34.2"
	I1212 01:35:27.937010       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:35:27.938127       1 config.go:200] "Starting service config controller"
	I1212 01:35:27.938155       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 01:35:27.938922       1 config.go:106] "Starting endpoint slice config controller"
	I1212 01:35:27.939014       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 01:35:27.939083       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 01:35:27.939117       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 01:35:27.941589       1 config.go:309] "Starting node config controller"
	I1212 01:35:27.941668       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 01:35:27.941698       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 01:35:28.039233       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 01:35:28.039239       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 01:35:28.039256       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e704c7281fcb74b9686f6baa06351c26b69648ee741e499cac05da9256535e0a] <==
	I1212 01:34:25.253611       1 server_linux.go:53] "Using iptables proxy"
	I1212 01:34:25.340246       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 01:34:25.440752       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 01:34:25.441811       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 01:34:25.441937       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:34:25.459705       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 01:34:25.459754       1 server_linux.go:132] "Using iptables Proxier"
	I1212 01:34:25.465223       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:34:25.465534       1 server.go:527] "Version info" version="v1.34.2"
	I1212 01:34:25.465557       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:34:25.472360       1 config.go:106] "Starting endpoint slice config controller"
	I1212 01:34:25.472387       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 01:34:25.473219       1 config.go:200] "Starting service config controller"
	I1212 01:34:25.473237       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 01:34:25.473490       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 01:34:25.473504       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 01:34:25.474546       1 config.go:309] "Starting node config controller"
	I1212 01:34:25.474563       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 01:34:25.474569       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 01:34:25.572801       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 01:34:25.573850       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 01:34:25.573863       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3ec4d221661b7f77facbb9727ed33f4f819f2dacde148a40f053f0a805768ac9] <==
	E1212 01:34:16.952758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 01:34:16.952803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 01:34:16.952862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 01:34:16.952927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 01:34:16.952983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 01:34:16.953104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 01:34:16.953155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 01:34:16.953210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 01:34:16.953258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 01:34:16.953309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 01:34:16.953354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 01:34:16.953404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 01:34:16.953460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 01:34:16.953533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 01:34:16.953700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1212 01:34:16.957973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 01:34:17.842090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 01:34:17.969300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1212 01:34:18.434971       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:35:09.931786       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1212 01:35:09.931820       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1212 01:35:09.931847       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1212 01:35:09.931877       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:35:09.931916       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1212 01:35:09.931944       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [550a496e315f16d7f7d774db26779f759f5db45901e0f542db49dd51ea30338e] <==
	I1212 01:35:26.290792       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:35:26.318161       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 01:35:26.318845       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 01:35:26.318938       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:35:26.361124       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1212 01:35:26.376909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1212 01:35:26.401745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 01:35:26.401883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 01:35:26.403071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 01:35:26.403228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 01:35:26.403386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 01:35:26.403485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 01:35:26.403544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 01:35:26.407221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 01:35:26.407302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 01:35:26.407368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 01:35:26.407431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 01:35:26.407498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 01:35:26.407560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 01:35:26.407663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 01:35:26.407785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 01:35:26.407852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 01:35:26.407906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 01:35:26.407944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1212 01:35:27.363224       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.724909    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-5jwqj\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1a666390-6ced-45ed-9bce-50a39727a9f8" pod="kube-system/coredns-66bc5c9577-5jwqj"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.725059    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6b319ff4ba53ab7887d2c3541575ccb4" pod="kube-system/etcd-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.725203    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="4d7163457b2f692953f8cb8b04ad4f04" pod="kube-system/kube-scheduler-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.725345    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a4ce20392fdce9bbaaeba656027321a0" pod="kube-system/kube-apiserver-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.725480    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbb4bfc784ac6e8df37d02e51f111018" pod="kube-system/kube-controller-manager-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: I1212 01:35:18.728377    1318 scope.go:117] "RemoveContainer" containerID="45aec66ba1df15580fb0dc430c9ca8833004e5cfc642d8cb7fbac231cbdf9574"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.729005    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-5jwqj\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1a666390-6ced-45ed-9bce-50a39727a9f8" pod="kube-system/coredns-66bc5c9577-5jwqj"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.729330    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6b319ff4ba53ab7887d2c3541575ccb4" pod="kube-system/etcd-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.729623    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="4d7163457b2f692953f8cb8b04ad4f04" pod="kube-system/kube-scheduler-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.729884    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a4ce20392fdce9bbaaeba656027321a0" pod="kube-system/kube-apiserver-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.730148    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbb4bfc784ac6e8df37d02e51f111018" pod="kube-system/kube-controller-manager-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.730390    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlvxp\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="3ccbdf76-9f38-49c5-b877-2bd169aabd0f" pod="kube-system/kube-proxy-nlvxp"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.730637    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-5s7pz\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="99856c46-d02d-48bc-95ee-225a3ead3606" pod="kube-system/kindnet-5s7pz"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.283024    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-249141\" is forbidden: User \"system:node:pause-249141\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" podUID="4d7163457b2f692953f8cb8b04ad4f04" pod="kube-system/kube-scheduler-pause-249141"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.285573    1318 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-249141\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.293223    1318 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-249141\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.294366    1318 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-249141\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.299148    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-249141\" is forbidden: User \"system:node:pause-249141\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" podUID="a4ce20392fdce9bbaaeba656027321a0" pod="kube-system/kube-apiserver-pause-249141"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.318608    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-249141\" is forbidden: User \"system:node:pause-249141\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" podUID="bbb4bfc784ac6e8df37d02e51f111018" pod="kube-system/kube-controller-manager-pause-249141"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.339006    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-nlvxp\" is forbidden: User \"system:node:pause-249141\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" podUID="3ccbdf76-9f38-49c5-b877-2bd169aabd0f" pod="kube-system/kube-proxy-nlvxp"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.363659    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-5s7pz\" is forbidden: User \"system:node:pause-249141\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" podUID="99856c46-d02d-48bc-95ee-225a3ead3606" pod="kube-system/kindnet-5s7pz"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.376245    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-5jwqj\" is forbidden: User \"system:node:pause-249141\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" podUID="1a666390-6ced-45ed-9bce-50a39727a9f8" pod="kube-system/coredns-66bc5c9577-5jwqj"
	Dec 12 01:35:37 pause-249141 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 01:35:37 pause-249141 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 01:35:37 pause-249141 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-249141 -n pause-249141
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-249141 -n pause-249141: exit status 2 (448.931081ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-249141 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-249141
helpers_test.go:244: (dbg) docker inspect pause-249141:

-- stdout --
	[
	    {
	        "Id": "88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8",
	        "Created": "2025-12-12T01:33:54.361995773Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 699411,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:33:54.424017771Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8/hosts",
	        "LogPath": "/var/lib/docker/containers/88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8/88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8-json.log",
	        "Name": "/pause-249141",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-249141:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-249141",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "88cbebe85a31326ae544edffcecf680d495783edccaed9f3573511d3ed8ab2b8",
	                "LowerDir": "/var/lib/docker/overlay2/9d2306f05d3ee51b32c40c92d5a66603d62319276a100bce921d8746bdffa8d7-init/diff:/var/lib/docker/overlay2/312acdcca8c5c90ada236fa0dd866f841348e5b8485928af37d3628cccc20197/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9d2306f05d3ee51b32c40c92d5a66603d62319276a100bce921d8746bdffa8d7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9d2306f05d3ee51b32c40c92d5a66603d62319276a100bce921d8746bdffa8d7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9d2306f05d3ee51b32c40c92d5a66603d62319276a100bce921d8746bdffa8d7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-249141",
	                "Source": "/var/lib/docker/volumes/pause-249141/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-249141",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-249141",
	                "name.minikube.sigs.k8s.io": "pause-249141",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4781a30b88a5045b0c4548e2c037c0f1fe8d0c6a5495f7cce10d21a92ca73841",
	            "SandboxKey": "/var/run/docker/netns/4781a30b88a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-249141": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:e1:2a:b5:24:4b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71b1cfc47b2b92d2839fa9051327318c5a1e86d53e80da924ee32224e2285d94",
	                    "EndpointID": "d90ed1690d0e1fc02e469afcc2ba48742dd94c90960c24153de9c1a6b9d72800",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-249141",
	                        "88cbebe85a31"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
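
The JSON block above is the raw docker container inspect output for the pause-249141 node container, captured before log collection. A minimal Go sketch (not the helpers_test.go implementation) of capturing and decoding the fields discussed here; the container name comes from the log, the struct and error handling are illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// docker container inspect prints a JSON array, one object per container.
		out, err := exec.Command("docker", "container", "inspect", "pause-249141").Output()
		if err != nil {
			panic(err)
		}
		var containers []struct {
			Config struct {
				Hostname string
			}
			NetworkSettings struct {
				Networks map[string]struct {
					IPAddress string
				}
			}
		}
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		for _, c := range containers {
			for net, ep := range c.NetworkSettings.Networks {
				fmt.Printf("%s: network %s, IP %s\n", c.Config.Hostname, net, ep.IPAddress)
			}
		}
	}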
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-249141 -n pause-249141
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-249141 -n pause-249141: exit status 2 (459.986069ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
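
The non-zero exit here is plausible rather than fatal: minikube status reports component state through its exit code, so right after a pause attempt the Host line can still print Running while the overall status exits 2; the harness notes "(may be ok)" and continues into post-mortem collection.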
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-249141 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-249141 logs -n 25: (1.925207465s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-869533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p missing-upgrade-812493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-812493    │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p missing-upgrade-812493                                                                                                                       │ missing-upgrade-812493    │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-224473 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ stop    │ -p kubernetes-upgrade-224473                                                                                                                    │ kubernetes-upgrade-224473 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p kubernetes-upgrade-224473 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-224473 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ delete  │ -p NoKubernetes-869533                                                                                                                          │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p NoKubernetes-869533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ ssh     │ -p NoKubernetes-869533 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │                     │
	│ stop    │ -p NoKubernetes-869533                                                                                                                          │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p NoKubernetes-869533 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ ssh     │ -p NoKubernetes-869533 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │                     │
	│ delete  │ -p NoKubernetes-869533                                                                                                                          │ NoKubernetes-869533       │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p stopped-upgrade-204630 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-204630    │ jenkins │ v1.35.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ stop    │ stopped-upgrade-204630 stop                                                                                                                     │ stopped-upgrade-204630    │ jenkins │ v1.35.0 │ 12 Dec 25 01:24 UTC │ 12 Dec 25 01:24 UTC │
	│ start   │ -p stopped-upgrade-204630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-204630    │ jenkins │ v1.37.0 │ 12 Dec 25 01:24 UTC │ 12 Dec 25 01:28 UTC │
	│ delete  │ -p stopped-upgrade-204630                                                                                                                       │ stopped-upgrade-204630    │ jenkins │ v1.37.0 │ 12 Dec 25 01:28 UTC │ 12 Dec 25 01:28 UTC │
	│ start   │ -p running-upgrade-260319 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-260319    │ jenkins │ v1.35.0 │ 12 Dec 25 01:28 UTC │ 12 Dec 25 01:29 UTC │
	│ start   │ -p running-upgrade-260319 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-260319    │ jenkins │ v1.37.0 │ 12 Dec 25 01:29 UTC │ 12 Dec 25 01:33 UTC │
	│ delete  │ -p running-upgrade-260319                                                                                                                       │ running-upgrade-260319    │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ start   │ -p pause-249141 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-249141              │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:35 UTC │
	│ start   │ -p pause-249141 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-249141              │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │ 12 Dec 25 01:35 UTC │
	│ delete  │ -p kubernetes-upgrade-224473                                                                                                                    │ kubernetes-upgrade-224473 │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │ 12 Dec 25 01:35 UTC │
	│ start   │ -p force-systemd-flag-272786 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                     │ force-systemd-flag-272786 │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │                     │
	│ pause   │ -p pause-249141 --alsologtostderr -v=5                                                                                                          │ pause-249141              │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:35:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:35:23.450987  703668 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:35:23.451102  703668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:23.451112  703668 out.go:374] Setting ErrFile to fd 2...
	I1212 01:35:23.451117  703668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:23.451388  703668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 01:35:23.451777  703668 out.go:368] Setting JSON to false
	I1212 01:35:23.452707  703668 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15469,"bootTime":1765487855,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 01:35:23.452775  703668 start.go:143] virtualization:  
	I1212 01:35:23.457144  703668 out.go:179] * [force-systemd-flag-272786] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:35:23.460786  703668 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:35:23.460950  703668 notify.go:221] Checking for updates...
	I1212 01:35:23.468238  703668 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:35:23.471751  703668 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 01:35:23.474931  703668 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 01:35:23.478269  703668 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:35:23.481424  703668 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:35:23.485225  703668 config.go:182] Loaded profile config "pause-249141": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:35:23.485382  703668 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:35:23.539677  703668 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:35:23.539810  703668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:23.651743  703668 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 01:35:23.636314483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:23.651845  703668 docker.go:319] overlay module found
	I1212 01:35:23.655237  703668 out.go:179] * Using the docker driver based on user configuration
	I1212 01:35:23.658208  703668 start.go:309] selected driver: docker
	I1212 01:35:23.658223  703668 start.go:927] validating driver "docker" against <nil>
	I1212 01:35:23.658235  703668 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:35:23.658938  703668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:23.785346  703668 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 01:35:23.773402814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:23.785502  703668 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 01:35:23.785709  703668 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 01:35:23.789062  703668 out.go:179] * Using Docker driver with root privileges
	I1212 01:35:23.792143  703668 cni.go:84] Creating CNI manager for ""
	I1212 01:35:23.792210  703668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 01:35:23.792218  703668 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 01:35:23.792297  703668 start.go:353] cluster config:
	{Name:force-systemd-flag-272786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-272786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:23.795599  703668 out.go:179] * Starting "force-systemd-flag-272786" primary control-plane node in "force-systemd-flag-272786" cluster
	I1212 01:35:23.798488  703668 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 01:35:23.801379  703668 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:35:23.804150  703668 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 01:35:23.804193  703668 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 01:35:23.804214  703668 cache.go:65] Caching tarball of preloaded images
	I1212 01:35:23.804219  703668 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:35:23.804301  703668 preload.go:238] Found /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 01:35:23.804312  703668 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 01:35:23.804443  703668 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/config.json ...
	I1212 01:35:23.804462  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/config.json: {Name:mk0235976c494188d16fb0f0a2c8ac936db70904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:23.832530  703668 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:35:23.832555  703668 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:35:23.832571  703668 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:35:23.832599  703668 start.go:360] acquireMachinesLock for force-systemd-flag-272786: {Name:mkd74d51b820f71e81e014063b2059fe6d4e2a6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:35:23.832707  703668 start.go:364] duration metric: took 87.481µs to acquireMachinesLock for "force-systemd-flag-272786"
	I1212 01:35:23.832739  703668 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-272786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-272786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:35:23.832818  703668 start.go:125] createHost starting for "" (driver="docker")
	I1212 01:35:26.320874  701970 node_ready.go:49] node "pause-249141" is "Ready"
	I1212 01:35:26.320900  701970 node_ready.go:38] duration metric: took 7.385593977s for node "pause-249141" to be "Ready" ...
	I1212 01:35:26.320918  701970 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:35:26.320979  701970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:26.341021  701970 api_server.go:72] duration metric: took 7.827340572s to wait for apiserver process to appear ...
	I1212 01:35:26.341043  701970 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:35:26.341062  701970 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 01:35:26.362458  701970 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:35:26.362529  701970 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:35:26.841763  701970 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 01:35:26.853739  701970 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:35:26.853809  701970 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:35:27.341196  701970 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 01:35:27.351169  701970 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:35:27.351208  701970 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:35:27.841916  701970 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 01:35:27.850409  701970 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 01:35:27.852386  701970 api_server.go:141] control plane version: v1.34.2
	I1212 01:35:27.852420  701970 api_server.go:131] duration metric: took 1.511369891s to wait for apiserver health ...
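
The 403, 500, and final 200 responses above trace minikube's healthz poll: anonymous requests are rejected with 403 until the bootstrap RBAC policy that permits them is in place, then the endpoint returns 500 while the rbac/bootstrap-roles post-start hook is still failing, and finally 200 with body "ok". Judging by the timestamps the loop retries roughly every 500ms. A minimal sketch of that polling pattern, with the endpoint taken from the log and the timeout and InsecureSkipVerify choice as illustrative assumptions:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // body is "ok" once healthy
					return
				}
				// Transient 403/500 answers like those logged above land here.
				fmt.Println("healthz not ready, status", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for healthz")
	}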
	I1212 01:35:27.852431  701970 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:35:27.857674  701970 system_pods.go:59] 7 kube-system pods found
	I1212 01:35:27.857714  701970 system_pods.go:61] "coredns-66bc5c9577-5jwqj" [1a666390-6ced-45ed-9bce-50a39727a9f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:35:27.857724  701970 system_pods.go:61] "etcd-pause-249141" [3046d2d9-2d8b-44c1-8d60-385302cce3b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:35:27.857729  701970 system_pods.go:61] "kindnet-5s7pz" [99856c46-d02d-48bc-95ee-225a3ead3606] Running
	I1212 01:35:27.857735  701970 system_pods.go:61] "kube-apiserver-pause-249141" [f453ee7d-27ab-4f08-86fb-7539d51d38ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:35:27.857742  701970 system_pods.go:61] "kube-controller-manager-pause-249141" [ed329836-67de-4e56-bd9f-7884d50f0656] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:35:27.857747  701970 system_pods.go:61] "kube-proxy-nlvxp" [3ccbdf76-9f38-49c5-b877-2bd169aabd0f] Running
	I1212 01:35:27.857752  701970 system_pods.go:61] "kube-scheduler-pause-249141" [3d3e7b33-960a-41c4-9d6a-c1e02d98ce86] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:35:27.857758  701970 system_pods.go:74] duration metric: took 5.321603ms to wait for pod list to return data ...
	I1212 01:35:27.857770  701970 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:35:27.860325  701970 default_sa.go:45] found service account: "default"
	I1212 01:35:27.860351  701970 default_sa.go:55] duration metric: took 2.571835ms for default service account to be created ...
	I1212 01:35:27.860360  701970 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:35:27.863384  701970 system_pods.go:86] 7 kube-system pods found
	I1212 01:35:27.863431  701970 system_pods.go:89] "coredns-66bc5c9577-5jwqj" [1a666390-6ced-45ed-9bce-50a39727a9f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:35:27.863447  701970 system_pods.go:89] "etcd-pause-249141" [3046d2d9-2d8b-44c1-8d60-385302cce3b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:35:27.863454  701970 system_pods.go:89] "kindnet-5s7pz" [99856c46-d02d-48bc-95ee-225a3ead3606] Running
	I1212 01:35:27.863461  701970 system_pods.go:89] "kube-apiserver-pause-249141" [f453ee7d-27ab-4f08-86fb-7539d51d38ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:35:27.863473  701970 system_pods.go:89] "kube-controller-manager-pause-249141" [ed329836-67de-4e56-bd9f-7884d50f0656] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:35:27.863484  701970 system_pods.go:89] "kube-proxy-nlvxp" [3ccbdf76-9f38-49c5-b877-2bd169aabd0f] Running
	I1212 01:35:27.863490  701970 system_pods.go:89] "kube-scheduler-pause-249141" [3d3e7b33-960a-41c4-9d6a-c1e02d98ce86] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:35:27.863501  701970 system_pods.go:126] duration metric: took 3.134677ms to wait for k8s-apps to be running ...
	I1212 01:35:27.863509  701970 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:35:27.863577  701970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:35:27.878753  701970 system_svc.go:56] duration metric: took 15.234841ms WaitForService to wait for kubelet
	I1212 01:35:27.878780  701970 kubeadm.go:587] duration metric: took 9.365104158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:35:27.878797  701970 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:35:27.882426  701970 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 01:35:27.882461  701970 node_conditions.go:123] node cpu capacity is 2
	I1212 01:35:27.882474  701970 node_conditions.go:105] duration metric: took 3.671731ms to run NodePressure ...
	I1212 01:35:27.882487  701970 start.go:242] waiting for startup goroutines ...
	I1212 01:35:27.882494  701970 start.go:247] waiting for cluster config update ...
	I1212 01:35:27.882503  701970 start.go:256] writing updated cluster config ...
	I1212 01:35:27.882868  701970 ssh_runner.go:195] Run: rm -f paused
	I1212 01:35:27.888098  701970 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 01:35:27.888615  701970 kapi.go:59] client config for pause-249141: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/client.key", CAFile:"/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
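
The rest.Config dump above carries everything needed to talk to the cluster's apiserver: the forwarded host endpoint plus the profile's client certificate, key, and CA. A minimal client-go sketch built from those same fields (paths copied from the log; listing kube-system pods is an illustrative use, not minikube's own code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg := &rest.Config{
			Host: "https://192.168.76.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/22101-487723/.minikube/profiles/pause-249141/client.key",
				CAFile:   "/home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase)
		}
	}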
	I1212 01:35:27.892394  701970 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5jwqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:23.836301  703668 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 01:35:23.836543  703668 start.go:159] libmachine.API.Create for "force-systemd-flag-272786" (driver="docker")
	I1212 01:35:23.836580  703668 client.go:173] LocalClient.Create starting
	I1212 01:35:23.836651  703668 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem
	I1212 01:35:23.836693  703668 main.go:143] libmachine: Decoding PEM data...
	I1212 01:35:23.836713  703668 main.go:143] libmachine: Parsing certificate...
	I1212 01:35:23.836766  703668 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem
	I1212 01:35:23.836788  703668 main.go:143] libmachine: Decoding PEM data...
	I1212 01:35:23.836805  703668 main.go:143] libmachine: Parsing certificate...
	I1212 01:35:23.837183  703668 cli_runner.go:164] Run: docker network inspect force-systemd-flag-272786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 01:35:23.868951  703668 cli_runner.go:211] docker network inspect force-systemd-flag-272786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 01:35:23.869030  703668 network_create.go:284] running [docker network inspect force-systemd-flag-272786] to gather additional debugging logs...
	I1212 01:35:23.869057  703668 cli_runner.go:164] Run: docker network inspect force-systemd-flag-272786
	W1212 01:35:23.903092  703668 cli_runner.go:211] docker network inspect force-systemd-flag-272786 returned with exit code 1
	I1212 01:35:23.903126  703668 network_create.go:287] error running [docker network inspect force-systemd-flag-272786]: docker network inspect force-systemd-flag-272786: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-272786 not found
	I1212 01:35:23.903150  703668 network_create.go:289] output of [docker network inspect force-systemd-flag-272786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-272786 not found
	
	** /stderr **
	I1212 01:35:23.903288  703668 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:35:23.930433  703668 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-987f53aa9676 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:59:9a:7d:dd:1e} reservation:<nil>}
	I1212 01:35:23.930752  703668 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-3f096d49a95b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:06:56:75:08:cc} reservation:<nil>}
	I1212 01:35:23.931024  703668 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0506280b338c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:9b:ca:19:ce:5d} reservation:<nil>}
	I1212 01:35:23.931310  703668 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-71b1cfc47b2b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:24:cd:de:f9:73} reservation:<nil>}
	I1212 01:35:23.931735  703668 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5c40}
	I1212 01:35:23.931760  703668 network_create.go:124] attempt to create docker network force-systemd-flag-272786 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 01:35:23.931823  703668 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-272786 force-systemd-flag-272786
	I1212 01:35:24.027366  703668 network_create.go:108] docker network force-systemd-flag-272786 192.168.85.0/24 created
	I1212 01:35:24.027406  703668 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-272786" container
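
The network.go lines above show the subnet picker at work: candidate 192.168.x.0/24 blocks are tried from 49 upward, stepping the third octet by 9 in this run, skipping any already claimed by a docker bridge, and the first free one supplies the gateway (.1) and the node's static IP (.2). A minimal sketch of that scan, with the taken set hard-coded from the log rather than derived from docker network inspect:

	package main

	import "fmt"

	func main() {
		// Subnets the log shows as already claimed by existing bridges.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		for octet := 49; octet < 256; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[subnet] {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", subnet)
			fmt.Printf("gateway 192.168.%d.1, static node IP 192.168.%d.2\n", octet, octet)
			return
		}
	}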
	I1212 01:35:24.027480  703668 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 01:35:24.062073  703668 cli_runner.go:164] Run: docker volume create force-systemd-flag-272786 --label name.minikube.sigs.k8s.io=force-systemd-flag-272786 --label created_by.minikube.sigs.k8s.io=true
	I1212 01:35:24.094272  703668 oci.go:103] Successfully created a docker volume force-systemd-flag-272786
	I1212 01:35:24.094374  703668 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-272786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-272786 --entrypoint /usr/bin/test -v force-systemd-flag-272786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 01:35:24.763043  703668 oci.go:107] Successfully prepared a docker volume force-systemd-flag-272786
	I1212 01:35:24.763105  703668 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 01:35:24.763114  703668 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 01:35:24.763188  703668 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-272786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	W1212 01:35:29.957261  701970 pod_ready.go:104] pod "coredns-66bc5c9577-5jwqj" is not "Ready", error: <nil>
	I1212 01:35:30.898086  701970 pod_ready.go:94] pod "coredns-66bc5c9577-5jwqj" is "Ready"
	I1212 01:35:30.898118  701970 pod_ready.go:86] duration metric: took 3.005701499s for pod "coredns-66bc5c9577-5jwqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:30.905086  701970 pod_ready.go:83] waiting for pod "etcd-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:30.909375  701970 pod_ready.go:94] pod "etcd-pause-249141" is "Ready"
	I1212 01:35:30.909405  701970 pod_ready.go:86] duration metric: took 4.289923ms for pod "etcd-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:30.911459  701970 pod_ready.go:83] waiting for pod "kube-apiserver-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 01:35:32.916710  701970 pod_ready.go:104] pod "kube-apiserver-pause-249141" is not "Ready", error: <nil>
	I1212 01:35:28.988164  703668 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-272786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.224940156s)
	I1212 01:35:28.988205  703668 kic.go:203] duration metric: took 4.225087147s to extract preloaded images to volume ...
	W1212 01:35:28.988359  703668 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 01:35:28.988468  703668 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 01:35:29.045491  703668 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-272786 --name force-systemd-flag-272786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-272786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-272786 --network force-systemd-flag-272786 --ip 192.168.85.2 --volume force-systemd-flag-272786:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
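
This docker run invocation matches the container shape seen in the pause-249141 inspect dump at the top of this section: privileged with seccomp and apparmor unconfined, tmpfs mounted on /run and /tmp, the host's /lib/modules bind-mounted read-only, the profile volume on /var, a static IP on the per-profile bridge network, and ports 22, 2376, 5000, 8443, and 32443 each published to an ephemeral 127.0.0.1 host port.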
	I1212 01:35:29.355225  703668 cli_runner.go:164] Run: docker container inspect force-systemd-flag-272786 --format={{.State.Running}}
	I1212 01:35:29.376194  703668 cli_runner.go:164] Run: docker container inspect force-systemd-flag-272786 --format={{.State.Status}}
	I1212 01:35:29.405354  703668 cli_runner.go:164] Run: docker exec force-systemd-flag-272786 stat /var/lib/dpkg/alternatives/iptables
	I1212 01:35:29.467675  703668 oci.go:144] the created container "force-systemd-flag-272786" has a running status.
	I1212 01:35:29.467713  703668 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa...
	I1212 01:35:29.734937  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 01:35:29.734989  703668 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 01:35:29.770721  703668 cli_runner.go:164] Run: docker container inspect force-systemd-flag-272786 --format={{.State.Status}}
	I1212 01:35:29.798833  703668 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 01:35:29.798854  703668 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-272786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 01:35:29.908743  703668 cli_runner.go:164] Run: docker container inspect force-systemd-flag-272786 --format={{.State.Status}}
	I1212 01:35:29.946651  703668 machine.go:94] provisionDockerMachine start ...
	I1212 01:35:29.946749  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:29.981782  703668 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:29.982133  703668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1212 01:35:29.982144  703668 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:35:29.982754  703668 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59984->127.0.0.1:33434: read: connection reset by peer
	I1212 01:35:33.134445  703668 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-272786
	
	I1212 01:35:33.134470  703668 ubuntu.go:182] provisioning hostname "force-systemd-flag-272786"
	I1212 01:35:33.134534  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:33.152340  703668 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:33.152652  703668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1212 01:35:33.152669  703668 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-272786 && echo "force-systemd-flag-272786" | sudo tee /etc/hostname
	I1212 01:35:33.320347  703668 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-272786
	
	I1212 01:35:33.320429  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:33.338183  703668 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:33.338489  703668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1212 01:35:33.338510  703668 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-272786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-272786/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-272786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:35:33.486926  703668 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:35:33.486963  703668 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-487723/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-487723/.minikube}
	I1212 01:35:33.487009  703668 ubuntu.go:190] setting up certificates
	I1212 01:35:33.487017  703668 provision.go:84] configureAuth start
	I1212 01:35:33.487086  703668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-272786
	I1212 01:35:33.504886  703668 provision.go:143] copyHostCerts
	I1212 01:35:33.504935  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 01:35:33.504970  703668 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem, removing ...
	I1212 01:35:33.504977  703668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem
	I1212 01:35:33.505057  703668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/cert.pem (1123 bytes)
	I1212 01:35:33.505141  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 01:35:33.505157  703668 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem, removing ...
	I1212 01:35:33.505162  703668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem
	I1212 01:35:33.505187  703668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/key.pem (1679 bytes)
	I1212 01:35:33.505235  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 01:35:33.505251  703668 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem, removing ...
	I1212 01:35:33.505259  703668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem
	I1212 01:35:33.505282  703668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-487723/.minikube/ca.pem (1078 bytes)
	I1212 01:35:33.505338  703668 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-272786 san=[127.0.0.1 192.168.85.2 force-systemd-flag-272786 localhost minikube]
	I1212 01:35:33.680054  703668 provision.go:177] copyRemoteCerts
	I1212 01:35:33.680140  703668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:35:33.680182  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:33.699306  703668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa Username:docker}
	I1212 01:35:33.806714  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 01:35:33.806773  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 01:35:33.825001  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 01:35:33.825095  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:35:33.843609  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 01:35:33.843696  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:35:33.863654  703668 provision.go:87] duration metric: took 376.613968ms to configureAuth
	I1212 01:35:33.863682  703668 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:35:33.863914  703668 config.go:182] Loaded profile config "force-systemd-flag-272786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:35:33.864022  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:33.881722  703668 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:33.882037  703668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1212 01:35:33.882051  703668 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:35:34.179565  703668 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:35:34.179589  703668 machine.go:97] duration metric: took 4.232914981s to provisionDockerMachine
	I1212 01:35:34.179601  703668 client.go:176] duration metric: took 10.343010597s to LocalClient.Create
	I1212 01:35:34.179615  703668 start.go:167] duration metric: took 10.343073169s to libmachine.API.Create "force-systemd-flag-272786"
	I1212 01:35:34.179627  703668 start.go:293] postStartSetup for "force-systemd-flag-272786" (driver="docker")
	I1212 01:35:34.179637  703668 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:35:34.179705  703668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:35:34.179755  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:34.198354  703668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa Username:docker}
	I1212 01:35:34.302942  703668 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:35:34.306278  703668 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:35:34.306303  703668 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:35:34.306316  703668 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/addons for local assets ...
	I1212 01:35:34.306370  703668 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-487723/.minikube/files for local assets ...
	I1212 01:35:34.306449  703668 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> 4909542.pem in /etc/ssl/certs
	I1212 01:35:34.306456  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /etc/ssl/certs/4909542.pem
	I1212 01:35:34.306557  703668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:35:34.314264  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 01:35:34.332613  703668 start.go:296] duration metric: took 152.972381ms for postStartSetup
	I1212 01:35:34.333021  703668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-272786
	I1212 01:35:34.349693  703668 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/config.json ...
	I1212 01:35:34.349986  703668 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:35:34.350040  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:34.367298  703668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa Username:docker}
	I1212 01:35:34.472017  703668 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:35:34.476574  703668 start.go:128] duration metric: took 10.643741758s to createHost
	I1212 01:35:34.476601  703668 start.go:83] releasing machines lock for "force-systemd-flag-272786", held for 10.643878944s
	I1212 01:35:34.476672  703668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-272786
	I1212 01:35:34.494056  703668 ssh_runner.go:195] Run: cat /version.json
	I1212 01:35:34.494091  703668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:35:34.494105  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:34.494144  703668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-272786
	I1212 01:35:34.519554  703668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa Username:docker}
	I1212 01:35:34.520741  703668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/force-systemd-flag-272786/id_rsa Username:docker}
	I1212 01:35:34.622124  703668 ssh_runner.go:195] Run: systemctl --version
	I1212 01:35:34.714813  703668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:35:34.755898  703668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:35:34.760538  703668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:35:34.760609  703668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:35:34.789434  703668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 01:35:34.789462  703668 start.go:496] detecting cgroup driver to use...
	I1212 01:35:34.789477  703668 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1212 01:35:34.789896  703668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:35:34.813518  703668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:35:34.826763  703668 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:35:34.826823  703668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:35:34.843750  703668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:35:34.861767  703668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:35:34.989827  703668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:35:35.127609  703668 docker.go:234] disabling docker service ...
	I1212 01:35:35.127709  703668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:35:35.149841  703668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:35:35.164328  703668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:35:35.299054  703668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:35:35.442750  703668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:35:35.456841  703668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:35:35.472718  703668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 01:35:35.472822  703668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.482122  703668 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 01:35:35.482241  703668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.492031  703668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.501276  703668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.510287  703668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:35:35.518627  703668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.527801  703668 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.541620  703668 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:35:35.550187  703668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:35:35.558397  703668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:35:35.565825  703668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:35.683407  703668 ssh_runner.go:195] Run: sudo systemctl restart crio
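	The sed edits above converge on a small drop-in before this restart. As a hedged sketch of the resulting /etc/crio/crio.conf.d/02-crio.conf (section headers assumed from stock CRI-O packaging; the log only shows the individual key edits), the touched keys end up as:

	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"

	  [crio.runtime]
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]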
	I1212 01:35:35.855526  703668 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:35:35.855666  703668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:35:35.860180  703668 start.go:564] Will wait 60s for crictl version
	I1212 01:35:35.860298  703668 ssh_runner.go:195] Run: which crictl
	I1212 01:35:35.864171  703668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:35:35.888718  703668 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 01:35:35.888878  703668 ssh_runner.go:195] Run: crio --version
	I1212 01:35:35.921252  703668 ssh_runner.go:195] Run: crio --version
	I1212 01:35:35.956571  703668 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1212 01:35:34.919908  701970 pod_ready.go:104] pod "kube-apiserver-pause-249141" is not "Ready", error: <nil>
	I1212 01:35:36.922751  701970 pod_ready.go:94] pod "kube-apiserver-pause-249141" is "Ready"
	I1212 01:35:36.922783  701970 pod_ready.go:86] duration metric: took 6.011298613s for pod "kube-apiserver-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:36.927150  701970 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:36.934059  701970 pod_ready.go:94] pod "kube-controller-manager-pause-249141" is "Ready"
	I1212 01:35:36.934083  701970 pod_ready.go:86] duration metric: took 6.899682ms for pod "kube-controller-manager-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:36.937607  701970 pod_ready.go:83] waiting for pod "kube-proxy-nlvxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:36.944732  701970 pod_ready.go:94] pod "kube-proxy-nlvxp" is "Ready"
	I1212 01:35:36.944806  701970 pod_ready.go:86] duration metric: took 7.177173ms for pod "kube-proxy-nlvxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:36.948381  701970 pod_ready.go:83] waiting for pod "kube-scheduler-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:37.116355  701970 pod_ready.go:94] pod "kube-scheduler-pause-249141" is "Ready"
	I1212 01:35:37.116407  701970 pod_ready.go:86] duration metric: took 167.949341ms for pod "kube-scheduler-pause-249141" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:35:37.116427  701970 pod_ready.go:40] duration metric: took 9.228286921s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 01:35:37.213922  701970 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 01:35:37.219452  701970 out.go:179] * Done! kubectl is now configured to use "pause-249141" cluster and "default" namespace by default
	I1212 01:35:35.959386  703668 cli_runner.go:164] Run: docker network inspect force-systemd-flag-272786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:35:35.974728  703668 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 01:35:35.978732  703668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:35:35.988398  703668 kubeadm.go:884] updating cluster {Name:force-systemd-flag-272786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-272786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:35:35.988512  703668 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 01:35:35.988566  703668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:36.025116  703668 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:35:36.025138  703668 crio.go:433] Images already preloaded, skipping extraction
	I1212 01:35:36.025204  703668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:36.050850  703668 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:35:36.050870  703668 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:35:36.050877  703668 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1212 01:35:36.050971  703668 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-272786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-272786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:35:36.051056  703668 ssh_runner.go:195] Run: crio config
	I1212 01:35:36.132675  703668 cni.go:84] Creating CNI manager for ""
	I1212 01:35:36.132699  703668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 01:35:36.132714  703668 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:35:36.132757  703668 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-272786 NodeName:force-systemd-flag-272786 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:35:36.132927  703668 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-272786"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
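
	A config like the one above can be sanity-checked without mutating the node (a hypothetical invocation for illustration; the test proceeds straight to the real init below):

	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run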
	
	I1212 01:35:36.133007  703668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 01:35:36.141149  703668 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:35:36.141219  703668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:35:36.149641  703668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1212 01:35:36.162771  703668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:35:36.181393  703668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1212 01:35:36.195784  703668 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:35:36.204060  703668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:35:36.214259  703668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:36.332323  703668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:36.349205  703668 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786 for IP: 192.168.85.2
	I1212 01:35:36.349228  703668 certs.go:195] generating shared ca certs ...
	I1212 01:35:36.349244  703668 certs.go:227] acquiring lock for ca certs: {Name:mk856824cf2126fa3d2975ef18e195b6ab1234f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.349380  703668 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key
	I1212 01:35:36.349430  703668 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key
	I1212 01:35:36.349442  703668 certs.go:257] generating profile certs ...
	I1212 01:35:36.349498  703668 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/client.key
	I1212 01:35:36.349525  703668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/client.crt with IP's: []
	I1212 01:35:36.502060  703668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/client.crt ...
	I1212 01:35:36.502095  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/client.crt: {Name:mk0a971d98a200fe256e3ba9ef91e5c3f2cf41f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.502298  703668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/client.key ...
	I1212 01:35:36.502315  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/client.key: {Name:mk88b66a5b32881eee1f97d047f50f89de357f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.502415  703668 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key.8a6cec44
	I1212 01:35:36.502433  703668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt.8a6cec44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1212 01:35:36.672507  703668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt.8a6cec44 ...
	I1212 01:35:36.672535  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt.8a6cec44: {Name:mk6e672e56b809191e2dadd9ac8d967bc71e3eaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.672715  703668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key.8a6cec44 ...
	I1212 01:35:36.672728  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key.8a6cec44: {Name:mkebe8e696a445ea7af787f561fbd60b7f81dbe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.672818  703668 certs.go:382] copying /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt.8a6cec44 -> /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt
	I1212 01:35:36.672901  703668 certs.go:386] copying /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key.8a6cec44 -> /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key
	I1212 01:35:36.672966  703668 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.key
	I1212 01:35:36.672984  703668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.crt with IP's: []
	I1212 01:35:36.913439  703668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.crt ...
	I1212 01:35:36.913473  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.crt: {Name:mk9e1c5a1656642b05fdb99fa8ff9a4562274554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.913690  703668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.key ...
	I1212 01:35:36.913716  703668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.key: {Name:mk35c1fddb0c3101aab50410fe79a850daca3fd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:36.913816  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 01:35:36.913840  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 01:35:36.913853  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 01:35:36.913879  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 01:35:36.913895  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 01:35:36.913939  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 01:35:36.913957  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 01:35:36.913989  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 01:35:36.914044  703668 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem (1338 bytes)
	W1212 01:35:36.914085  703668 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954_empty.pem, impossibly tiny 0 bytes
	I1212 01:35:36.914094  703668 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:35:36.914126  703668 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:35:36.914159  703668 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:35:36.914190  703668 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/key.pem (1679 bytes)
	I1212 01:35:36.914236  703668 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem (1708 bytes)
	I1212 01:35:36.914275  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:36.914298  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem -> /usr/share/ca-certificates/490954.pem
	I1212 01:35:36.914315  703668 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem -> /usr/share/ca-certificates/4909542.pem
	I1212 01:35:36.914896  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:35:36.936710  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:35:36.957620  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:35:36.976804  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:35:36.997173  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 01:35:37.019228  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:35:37.040489  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:35:37.058510  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/force-systemd-flag-272786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:35:37.080237  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:35:37.098455  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/certs/490954.pem --> /usr/share/ca-certificates/490954.pem (1338 bytes)
	I1212 01:35:37.117307  703668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/ssl/certs/4909542.pem --> /usr/share/ca-certificates/4909542.pem (1708 bytes)
	I1212 01:35:37.136488  703668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:35:37.149481  703668 ssh_runner.go:195] Run: openssl version
	I1212 01:35:37.156509  703668 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/490954.pem
	I1212 01:35:37.164673  703668 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/490954.pem /etc/ssl/certs/490954.pem
	I1212 01:35:37.183640  703668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/490954.pem
	I1212 01:35:37.188610  703668 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:21 /usr/share/ca-certificates/490954.pem
	I1212 01:35:37.188737  703668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/490954.pem
	I1212 01:35:37.241458  703668 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:35:37.276611  703668 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/490954.pem /etc/ssl/certs/51391683.0
	I1212 01:35:37.293724  703668 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4909542.pem
	I1212 01:35:37.305636  703668 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4909542.pem /etc/ssl/certs/4909542.pem
	I1212 01:35:37.331874  703668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4909542.pem
	I1212 01:35:37.340244  703668 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:21 /usr/share/ca-certificates/4909542.pem
	I1212 01:35:37.340319  703668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4909542.pem
	I1212 01:35:37.416295  703668 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:35:37.427292  703668 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4909542.pem /etc/ssl/certs/3ec20f2e.0
	I1212 01:35:37.439024  703668 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:37.455447  703668 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:35:37.464724  703668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:37.471953  703668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:37.472022  703668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:37.519179  703668 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:35:37.530460  703668 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
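	The openssl/ln pairs above set up OpenSSL's hashed-directory CA lookup by hand: each trusted PEM under /etc/ssl/certs gets a <subject-hash>.0 symlink, which is what c_rehash automates in bulk. A minimal sketch of one iteration, assuming the same paths the log uses:

	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  sudo ln -fs "$CERT" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"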
	I1212 01:35:37.538128  703668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:35:37.543493  703668 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:35:37.543545  703668 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-272786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-272786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:37.543618  703668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:35:37.543678  703668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:35:37.574014  703668 cri.go:89] found id: ""
	I1212 01:35:37.574079  703668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:35:37.583959  703668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:35:37.595745  703668 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:35:37.595806  703668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:35:37.606671  703668 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:35:37.606720  703668 kubeadm.go:158] found existing configuration files:
	
	I1212 01:35:37.606768  703668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:35:37.620678  703668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:35:37.620738  703668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:35:37.630328  703668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:35:37.641154  703668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:35:37.641267  703668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:35:37.651303  703668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:35:37.661701  703668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:35:37.661806  703668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:35:37.671854  703668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:35:37.684040  703668 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:35:37.684136  703668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:35:37.700397  703668 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:35:37.761723  703668 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 01:35:37.762126  703668 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:35:37.807149  703668 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:35:37.807241  703668 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:35:37.807290  703668 kubeadm.go:319] OS: Linux
	I1212 01:35:37.807355  703668 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:35:37.807417  703668 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:35:37.807468  703668 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:35:37.807532  703668 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:35:37.807614  703668 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:35:37.807681  703668 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:35:37.807740  703668 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:35:37.807800  703668 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:35:37.807862  703668 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:35:37.894776  703668 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:35:37.894892  703668 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:35:37.894997  703668 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:35:37.911117  703668 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:35:37.916385  703668 out.go:252]   - Generating certificates and keys ...
	I1212 01:35:37.916505  703668 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:35:37.916582  703668 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:35:38.377878  703668 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	
	
	==> CRI-O <==
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.910158768Z" level=info msg="Started container" PID=2368 containerID=06ed94bd38e372eff5b039f17f5b555d55348a17a2f305a892bbefaccb2dff7d description=kube-system/etcd-pause-249141/etcd id=54e8ec35-57fe-4cf9-99c5-579c4c6cb3e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dbb2472c051010ed79b38e136567db0a0337b52feb4d26e960eef74b961ae160
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.978911482Z" level=info msg="Created container 5b492324a19d7368866c313237fbae39b56ac6a28750bbb96d9a4aab8764b945: kube-system/kindnet-5s7pz/kindnet-cni" id=a6406c11-84c7-4fd8-8410-62deb71c95cd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.97918007Z" level=info msg="Created container 550a496e315f16d7f7d774db26779f759f5db45901e0f542db49dd51ea30338e: kube-system/kube-scheduler-pause-249141/kube-scheduler" id=1c6efe33-aaaf-4dfd-b000-c72d3caada6a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.980203873Z" level=info msg="Starting container: 5b492324a19d7368866c313237fbae39b56ac6a28750bbb96d9a4aab8764b945" id=3a7d4cde-8af2-4582-bb46-21b88008bb86 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.983496545Z" level=info msg="Created container 86eb237121dbb40df2f9508434a7b2d0bb0bbb9a812e6b01a538de3d0c1b435a: kube-system/kube-apiserver-pause-249141/kube-apiserver" id=c56f590c-4860-49f7-bd04-098d09919844 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.983719153Z" level=info msg="Starting container: 550a496e315f16d7f7d774db26779f759f5db45901e0f542db49dd51ea30338e" id=ae2e4cd6-091b-4cae-9c97-b293452b098d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.987603876Z" level=info msg="Started container" PID=2398 containerID=5b492324a19d7368866c313237fbae39b56ac6a28750bbb96d9a4aab8764b945 description=kube-system/kindnet-5s7pz/kindnet-cni id=3a7d4cde-8af2-4582-bb46-21b88008bb86 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fdd6456bfcddc8e48cbe3d3163e3e454c7db94d343a0a66481ae6eaebc2b963d
	Dec 12 01:35:18 pause-249141 crio[2078]: time="2025-12-12T01:35:18.989417264Z" level=info msg="Started container" PID=2385 containerID=550a496e315f16d7f7d774db26779f759f5db45901e0f542db49dd51ea30338e description=kube-system/kube-scheduler-pause-249141/kube-scheduler id=ae2e4cd6-091b-4cae-9c97-b293452b098d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8fd836479bfbd7367ad9bd4ffb6c149e50f9f4f5c45f641d678e88874f3f307f
	Dec 12 01:35:19 pause-249141 crio[2078]: time="2025-12-12T01:35:19.002973975Z" level=info msg="Starting container: 86eb237121dbb40df2f9508434a7b2d0bb0bbb9a812e6b01a538de3d0c1b435a" id=17018a8f-eabd-4e5a-81e2-5e1186ea90d4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 01:35:19 pause-249141 crio[2078]: time="2025-12-12T01:35:19.034213156Z" level=info msg="Created container 4e079fbf56172a4e7de82ff26b90282dce34661a3ad4a9b1340b61c9cbb4d555: kube-system/kube-proxy-nlvxp/kube-proxy" id=cea791f6-3d99-47a0-8548-16edc8b47c99 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 01:35:19 pause-249141 crio[2078]: time="2025-12-12T01:35:19.038415918Z" level=info msg="Starting container: 4e079fbf56172a4e7de82ff26b90282dce34661a3ad4a9b1340b61c9cbb4d555" id=1b4a59bf-d0f9-4960-83d1-dc45b2dd4e46 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 01:35:19 pause-249141 crio[2078]: time="2025-12-12T01:35:19.06290652Z" level=info msg="Started container" PID=2390 containerID=86eb237121dbb40df2f9508434a7b2d0bb0bbb9a812e6b01a538de3d0c1b435a description=kube-system/kube-apiserver-pause-249141/kube-apiserver id=17018a8f-eabd-4e5a-81e2-5e1186ea90d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=86917fb33990b23b6b78e6cc6505a1a5c8f7563a74c4dd0715de2839f1bdb022
	Dec 12 01:35:19 pause-249141 crio[2078]: time="2025-12-12T01:35:19.075748975Z" level=info msg="Started container" PID=2361 containerID=4e079fbf56172a4e7de82ff26b90282dce34661a3ad4a9b1340b61c9cbb4d555 description=kube-system/kube-proxy-nlvxp/kube-proxy id=1b4a59bf-d0f9-4960-83d1-dc45b2dd4e46 name=/runtime.v1.RuntimeService/StartContainer sandboxID=192d1366817c4fb7dee05366d5c73a517c0cb58884369176dfe93d84002b6c5b
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.428827219Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.445502592Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.453416108Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.453544892Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.462249946Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.462439382Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.462533443Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.475967097Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.476162013Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.476244506Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.493818088Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 01:35:29 pause-249141 crio[2078]: time="2025-12-12T01:35:29.493853279Z" level=info msg="Updated default CNI network name to kindnet"
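	
	==> note (editor): atomic CNI conflist update <==
	The CREATE → WRITE → RENAME sequence above is the standard atomic-update idiom: kindnet stages the full config in 10-kindnet.conflist.temp and then renames it over 10-kindnet.conflist, so crio's CNI monitor never observes a half-written file. A minimal sketch of that idiom in Go (hypothetical path and payload, not kindnet's actual source):
	
	    package main
	
	    import "os"
	
	    // writeConflistAtomically stages data in a ".temp" sibling and renames it
	    // into place. rename(2) is atomic within one filesystem, so a watcher such
	    // as crio's CNI monitor sees either the old or the new file, never a torn
	    // write.
	    func writeConflistAtomically(path string, data []byte) error {
	        tmp := path + ".temp"
	        if err := os.WriteFile(tmp, data, 0o644); err != nil {
	            return err
	        }
	        return os.Rename(tmp, path)
	    }
	
	    func main() {
	        payload := []byte(`{"name":"kindnet","plugins":[{"type":"ptp"}]}`)
	        if err := writeConflistAtomically("/tmp/10-kindnet.conflist", payload); err != nil {
	            panic(err)
	        }
	    }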
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	5b492324a19d7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   fdd6456bfcddc       kindnet-5s7pz                          kube-system
	86eb237121dbb       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   24 seconds ago       Running             kube-apiserver            1                   86917fb33990b       kube-apiserver-pause-249141            kube-system
	550a496e315f1       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   24 seconds ago       Running             kube-scheduler            1                   8fd836479bfbd       kube-scheduler-pause-249141            kube-system
	06ed94bd38e37       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   24 seconds ago       Running             etcd                      1                   dbb2472c05101       etcd-pause-249141                      kube-system
	baa32678c2651       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   24 seconds ago       Running             kube-controller-manager   1                   09ad9d32edab6       kube-controller-manager-pause-249141   kube-system
	e1c1abd5f32e6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   3c59d8033f714       coredns-66bc5c9577-5jwqj               kube-system
	4e079fbf56172       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   24 seconds ago       Running             kube-proxy                1                   192d1366817c4       kube-proxy-nlvxp                       kube-system
	c771bc06bce8f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   3c59d8033f714       coredns-66bc5c9577-5jwqj               kube-system
	45aec66ba1df1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   fdd6456bfcddc       kindnet-5s7pz                          kube-system
	e704c7281fcb7       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   192d1366817c4       kube-proxy-nlvxp                       kube-system
	595d301c32a6e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   09ad9d32edab6       kube-controller-manager-pause-249141   kube-system
	adb8f76bff1a1       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   dbb2472c05101       etcd-pause-249141                      kube-system
	93423cf437a5d       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   86917fb33990b       kube-apiserver-pause-249141            kube-system
	3ec4d221661b7       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   8fd836479bfbd       kube-scheduler-pause-249141            kube-system
	
	
	==> coredns [c771bc06bce8f82dca79bdb4ca387282bc81275d57a564d32a9a0697a87b2d17] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50648 - 5096 "HINFO IN 2028248822654866337.4104538514851914123. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021771971s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e1c1abd5f32e6476b2cf029011d2249f974696e5db5ed40ae5e5686e0bd54717] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45260 - 42576 "HINFO IN 6515863558079566634.7371817655653771039. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026118041s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
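	
	==> note (editor): checking the coredns RBAC errors <==
	The "forbidden" list failures above land in the window right after the apiserver restart (see the "starting server with unsynced Kubernetes API" warning), so they may be transient rather than a missing grant. One way to separate the two cases is to ask the apiserver the same authorization question coredns failed on, e.g. kubectl auth can-i list services --as=system:serviceaccount:kube-system:coredns, or programmatically via a SubjectAccessReview. A minimal client-go sketch, assuming a reachable kubeconfig at a hypothetical path (not part of the test harness):
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	
	        authv1 "k8s.io/api/authorization/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func main() {
	        // Hypothetical kubeconfig location; substitute the cluster's own.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)
	
	        // Same user/verb/resource as the failed call in the log above.
	        sar := &authv1.SubjectAccessReview{
	            Spec: authv1.SubjectAccessReviewSpec{
	                User: "system:serviceaccount:kube-system:coredns",
	                ResourceAttributes: &authv1.ResourceAttributes{
	                    Verb:     "list",
	                    Resource: "services",
	                },
	            },
	        }
	        res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
	            context.Background(), sar, metav1.CreateOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
	    }
	
	If this reports allowed=true once the control plane has settled, the errors above were restart-window noise rather than a missing ClusterRoleBinding.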
	
	
	==> describe nodes <==
	Name:               pause-249141
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-249141
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0
	                    minikube.k8s.io/name=pause-249141
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T01_34_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 01:34:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-249141
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 01:35:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 01:35:05 +0000   Fri, 12 Dec 2025 01:34:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 01:35:05 +0000   Fri, 12 Dec 2025 01:34:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 01:35:05 +0000   Fri, 12 Dec 2025 01:34:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 01:35:05 +0000   Fri, 12 Dec 2025 01:35:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-249141
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                a5f80442-b367-47c0-9534-e87912c2f944
	  Boot ID:                    cbbb78f6-c2df-4b23-9269-8d5d442bffaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5jwqj                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-249141                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kindnet-5s7pz                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-249141             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-249141    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-nlvxp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-249141             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Warning  CgroupV1                 91s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node pause-249141 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node pause-249141 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     91s (x8 over 91s)  kubelet          Node pause-249141 status is now: NodeHasSufficientPID
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  84s                kubelet          Node pause-249141 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s                kubelet          Node pause-249141 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s                kubelet          Node pause-249141 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-249141 event: Registered Node pause-249141 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-249141 status is now: NodeReady
	  Normal   RegisteredNode           14s                node-controller  Node pause-249141 event: Registered Node pause-249141 in Controller
	
	
	==> dmesg <==
	[  +3.673413] overlayfs: idmapped layers are currently not supported
	[ +34.404177] overlayfs: idmapped layers are currently not supported
	[Dec12 00:59] overlayfs: idmapped layers are currently not supported
	[Dec12 01:00] overlayfs: idmapped layers are currently not supported
	[  +2.854463] overlayfs: idmapped layers are currently not supported
	[Dec12 01:01] overlayfs: idmapped layers are currently not supported
	[Dec12 01:02] overlayfs: idmapped layers are currently not supported
	[Dec12 01:03] overlayfs: idmapped layers are currently not supported
	[Dec12 01:08] overlayfs: idmapped layers are currently not supported
	[ +34.061772] overlayfs: idmapped layers are currently not supported
	[Dec12 01:09] overlayfs: idmapped layers are currently not supported
	[Dec12 01:11] overlayfs: idmapped layers are currently not supported
	[Dec12 01:12] overlayfs: idmapped layers are currently not supported
	[Dec12 01:13] overlayfs: idmapped layers are currently not supported
	[Dec12 01:14] overlayfs: idmapped layers are currently not supported
	[  +1.592007] overlayfs: idmapped layers are currently not supported
	[Dec12 01:15] overlayfs: idmapped layers are currently not supported
	[ +24.197582] overlayfs: idmapped layers are currently not supported
	[Dec12 01:16] overlayfs: idmapped layers are currently not supported
	[ +26.194679] overlayfs: idmapped layers are currently not supported
	[Dec12 01:17] overlayfs: idmapped layers are currently not supported
	[Dec12 01:18] overlayfs: idmapped layers are currently not supported
	[Dec12 01:21] overlayfs: idmapped layers are currently not supported
	[Dec12 01:22] overlayfs: idmapped layers are currently not supported
	[Dec12 01:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [06ed94bd38e372eff5b039f17f5b555d55348a17a2f305a892bbefaccb2dff7d] <==
	{"level":"warn","ts":"2025-12-12T01:35:24.048494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.077688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.114503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.154271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.166616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.192192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.234142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.267268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.299642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.333060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.351849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.394028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.426549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.472428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.509367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.524180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.544471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.579876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.609982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.635540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.730916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.747405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.771473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.792836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:35:24.910799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37648","server-name":"","error":"EOF"}
	
	
	==> etcd [adb8f76bff1a17e46695b9272558293767fdfe57f417ac8f86c718a1507b74ce] <==
	{"level":"warn","ts":"2025-12-12T01:34:15.413295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:34:15.446601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:34:15.514133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:34:15.547551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:34:15.568272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:34:15.606208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T01:34:15.688688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33132","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T01:35:09.936817Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-12T01:35:09.936885Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-249141","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-12T01:35:09.936993Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T01:35:10.095499Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T01:35:10.095604Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:35:10.095625Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-12T01:35:10.095685Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-12T01:35:10.095742Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T01:35:10.095780Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T01:35:10.095791Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:35:10.095810Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-12T01:35:10.095889Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T01:35:10.095939Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T01:35:10.095974Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:35:10.099397Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-12T01:35:10.099564Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T01:35:10.099615Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-12T01:35:10.099643Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-249141","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 01:35:44 up  4:18,  0 user,  load average: 2.42, 1.50, 1.60
	Linux pause-249141 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [45aec66ba1df15580fb0dc430c9ca8833004e5cfc642d8cb7fbac231cbdf9574] <==
	I1212 01:34:25.312967       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 01:34:25.313240       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 01:34:25.313417       1 main.go:148] setting mtu 1500 for CNI 
	I1212 01:34:25.313439       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 01:34:25.313453       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T01:34:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 01:34:25.526220       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 01:34:25.526246       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 01:34:25.526254       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 01:34:25.527071       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1212 01:34:55.527250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1212 01:34:55.527264       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1212 01:34:55.527364       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1212 01:34:55.527434       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1212 01:34:57.026912       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 01:34:57.026942       1 metrics.go:72] Registering metrics
	I1212 01:34:57.027061       1 controller.go:711] "Syncing nftables rules"
	I1212 01:35:05.531673       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 01:35:05.531733       1 main.go:301] handling current node
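	
	==> note (editor): the 10.96.0.1:443 i/o timeouts <==
	The attempt-0 kindnet container's watches time out against the kubernetes service VIP while the control plane is restarting, then its caches sync at 01:34:57. A quick reachability probe for that VIP can help distinguish "apiserver down" from "service routing broken"; note the VIP only answers where kube-proxy has programmed its rules, i.e. from the node or a pod netns. A minimal sketch in Go (address taken from the log):
	
	    package main
	
	    import (
	        "fmt"
	        "net"
	        "time"
	    )
	
	    func main() {
	        addr := "10.96.0.1:443" // kubernetes service VIP from the log
	        conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	        if err != nil {
	            // Mirrors the "dial tcp 10.96.0.1:443: i/o timeout" entries above.
	            fmt.Printf("dial %s failed: %v\n", addr, err)
	            return
	        }
	        defer conn.Close()
	        fmt.Printf("dial %s ok\n", addr)
	    }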
	
	
	==> kindnet [5b492324a19d7368866c313237fbae39b56ac6a28750bbb96d9a4aab8764b945] <==
	I1212 01:35:19.239236       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 01:35:19.243199       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1212 01:35:19.244159       1 main.go:148] setting mtu 1500 for CNI 
	I1212 01:35:19.249108       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 01:35:19.249175       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T01:35:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 01:35:19.430511       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 01:35:19.430552       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 01:35:19.430566       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 01:35:19.431462       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 01:35:26.530724       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 01:35:26.530760       1 metrics.go:72] Registering metrics
	I1212 01:35:26.530891       1 controller.go:711] "Syncing nftables rules"
	I1212 01:35:29.425388       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 01:35:29.425429       1 main.go:301] handling current node
	I1212 01:35:39.422745       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1212 01:35:39.422808       1 main.go:301] handling current node
	
	
	==> kube-apiserver [86eb237121dbb40df2f9508434a7b2d0bb0bbb9a812e6b01a538de3d0c1b435a] <==
	I1212 01:35:26.401982       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 01:35:26.402651       1 cache.go:39] Caches are synced for autoregister controller
	I1212 01:35:26.408658       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 01:35:26.439127       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1212 01:35:26.439553       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 01:35:26.439688       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 01:35:26.439748       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 01:35:26.447430       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 01:35:26.447948       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 01:35:26.458074       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 01:35:26.459139       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 01:35:26.459237       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 01:35:26.459352       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 01:35:26.459396       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1212 01:35:26.459419       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 01:35:26.461057       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 01:35:26.461114       1 policy_source.go:240] refreshing policies
	E1212 01:35:26.461353       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 01:35:26.497047       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 01:35:26.776464       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 01:35:28.313286       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 01:35:29.929430       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 01:35:29.973861       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 01:35:30.008940       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 01:35:30.106296       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [93423cf437a5df66043e9d8c67f967f73a3aba50c58c120f3210fc19fee0e72a] <==
	W1212 01:35:09.962571       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.962612       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.962652       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964158       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964233       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964282       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964332       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964378       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964431       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964479       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964542       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964596       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964645       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964693       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964746       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964820       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964859       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964895       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964933       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.964967       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.965003       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.965047       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.965128       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:35:09.965199       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [595d301c32a6e5bcd377b0b53fca8b556fd8a0a9887c54cb4dbc74fae9bc5012] <==
	I1212 01:34:23.622252       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 01:34:23.622662       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-249141"
	I1212 01:34:23.622794       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 01:34:23.623144       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1212 01:34:23.623279       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 01:34:23.624384       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 01:34:23.624478       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 01:34:23.624982       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1212 01:34:23.624984       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 01:34:23.626875       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 01:34:23.627451       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 01:34:23.627521       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 01:34:23.627533       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 01:34:23.631188       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 01:34:23.633516       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 01:34:23.633606       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 01:34:23.633632       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 01:34:23.633642       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 01:34:23.633648       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 01:34:23.642892       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 01:34:23.642920       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 01:34:23.642928       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 01:34:23.644355       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 01:34:23.644925       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-249141" podCIDRs=["10.244.0.0/24"]
	I1212 01:35:08.630379       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [baa32678c2651eca75a4f6b1c2685393cdffab03f5c5961c36b3fc1b1a93db68] <==
	I1212 01:35:29.898603       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 01:35:29.899426       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1212 01:35:29.893883       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1212 01:35:29.894351       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 01:35:29.894370       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 01:35:29.894636       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 01:35:29.900266       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 01:35:29.902793       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 01:35:29.903724       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1212 01:35:29.915661       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 01:35:29.923534       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 01:35:29.936418       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 01:35:29.944676       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 01:35:29.944792       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 01:35:29.944825       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 01:35:29.944780       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 01:35:29.947262       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 01:35:29.952497       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 01:35:29.952662       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 01:35:29.952765       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-249141"
	I1212 01:35:29.952855       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 01:35:29.953310       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 01:35:29.954878       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 01:35:29.967312       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 01:35:29.974445       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [4e079fbf56172a4e7de82ff26b90282dce34661a3ad4a9b1340b61c9cbb4d555] <==
	I1212 01:35:20.343706       1 server_linux.go:53] "Using iptables proxy"
	I1212 01:35:21.809076       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1212 01:35:26.424162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-249141\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1212 01:35:27.911018       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 01:35:27.911052       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 01:35:27.911124       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:35:27.932136       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 01:35:27.932200       1 server_linux.go:132] "Using iptables Proxier"
	I1212 01:35:27.936614       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:35:27.936986       1 server.go:527] "Version info" version="v1.34.2"
	I1212 01:35:27.937010       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:35:27.938127       1 config.go:200] "Starting service config controller"
	I1212 01:35:27.938155       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 01:35:27.938922       1 config.go:106] "Starting endpoint slice config controller"
	I1212 01:35:27.939014       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 01:35:27.939083       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 01:35:27.939117       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 01:35:27.941589       1 config.go:309] "Starting node config controller"
	I1212 01:35:27.941668       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 01:35:27.941698       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 01:35:28.039233       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 01:35:28.039239       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 01:35:28.039256       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e704c7281fcb74b9686f6baa06351c26b69648ee741e499cac05da9256535e0a] <==
	I1212 01:34:25.253611       1 server_linux.go:53] "Using iptables proxy"
	I1212 01:34:25.340246       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 01:34:25.440752       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 01:34:25.441811       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1212 01:34:25.441937       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:34:25.459705       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 01:34:25.459754       1 server_linux.go:132] "Using iptables Proxier"
	I1212 01:34:25.465223       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:34:25.465534       1 server.go:527] "Version info" version="v1.34.2"
	I1212 01:34:25.465557       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:34:25.472360       1 config.go:106] "Starting endpoint slice config controller"
	I1212 01:34:25.472387       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 01:34:25.473219       1 config.go:200] "Starting service config controller"
	I1212 01:34:25.473237       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 01:34:25.473490       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 01:34:25.473504       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 01:34:25.474546       1 config.go:309] "Starting node config controller"
	I1212 01:34:25.474563       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 01:34:25.474569       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 01:34:25.572801       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 01:34:25.573850       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 01:34:25.573863       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3ec4d221661b7f77facbb9727ed33f4f819f2dacde148a40f053f0a805768ac9] <==
	E1212 01:34:16.952758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 01:34:16.952803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 01:34:16.952862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 01:34:16.952927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 01:34:16.952983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 01:34:16.953104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 01:34:16.953155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 01:34:16.953210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 01:34:16.953258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 01:34:16.953309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 01:34:16.953354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 01:34:16.953404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 01:34:16.953460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 01:34:16.953533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 01:34:16.953700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1212 01:34:16.957973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 01:34:17.842090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 01:34:17.969300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1212 01:34:18.434971       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:35:09.931786       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1212 01:35:09.931820       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1212 01:35:09.931847       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1212 01:35:09.931877       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:35:09.931916       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1212 01:35:09.931944       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [550a496e315f16d7f7d774db26779f759f5db45901e0f542db49dd51ea30338e] <==
	I1212 01:35:26.290792       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:35:26.318161       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 01:35:26.318845       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 01:35:26.318938       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 01:35:26.361124       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1212 01:35:26.376909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1212 01:35:26.401745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 01:35:26.401883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 01:35:26.403071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 01:35:26.403228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 01:35:26.403386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 01:35:26.403485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 01:35:26.403544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 01:35:26.407221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 01:35:26.407302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 01:35:26.407368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 01:35:26.407431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 01:35:26.407498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 01:35:26.407560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 01:35:26.407663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 01:35:26.407785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 01:35:26.407852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 01:35:26.407906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 01:35:26.407944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1212 01:35:27.363224       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.724909    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-5jwqj\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1a666390-6ced-45ed-9bce-50a39727a9f8" pod="kube-system/coredns-66bc5c9577-5jwqj"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.725059    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6b319ff4ba53ab7887d2c3541575ccb4" pod="kube-system/etcd-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.725203    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="4d7163457b2f692953f8cb8b04ad4f04" pod="kube-system/kube-scheduler-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.725345    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a4ce20392fdce9bbaaeba656027321a0" pod="kube-system/kube-apiserver-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.725480    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbb4bfc784ac6e8df37d02e51f111018" pod="kube-system/kube-controller-manager-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: I1212 01:35:18.728377    1318 scope.go:117] "RemoveContainer" containerID="45aec66ba1df15580fb0dc430c9ca8833004e5cfc642d8cb7fbac231cbdf9574"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.729005    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-5jwqj\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1a666390-6ced-45ed-9bce-50a39727a9f8" pod="kube-system/coredns-66bc5c9577-5jwqj"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.729330    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6b319ff4ba53ab7887d2c3541575ccb4" pod="kube-system/etcd-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.729623    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="4d7163457b2f692953f8cb8b04ad4f04" pod="kube-system/kube-scheduler-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.729884    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a4ce20392fdce9bbaaeba656027321a0" pod="kube-system/kube-apiserver-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.730148    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-249141\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbb4bfc784ac6e8df37d02e51f111018" pod="kube-system/kube-controller-manager-pause-249141"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.730390    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlvxp\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="3ccbdf76-9f38-49c5-b877-2bd169aabd0f" pod="kube-system/kube-proxy-nlvxp"
	Dec 12 01:35:18 pause-249141 kubelet[1318]: E1212 01:35:18.730637    1318 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-5s7pz\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="99856c46-d02d-48bc-95ee-225a3ead3606" pod="kube-system/kindnet-5s7pz"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.283024    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-249141\" is forbidden: User \"system:node:pause-249141\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" podUID="4d7163457b2f692953f8cb8b04ad4f04" pod="kube-system/kube-scheduler-pause-249141"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.285573    1318 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-249141\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.293223    1318 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-249141\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.294366    1318 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-249141\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.299148    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-249141\" is forbidden: User \"system:node:pause-249141\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" podUID="a4ce20392fdce9bbaaeba656027321a0" pod="kube-system/kube-apiserver-pause-249141"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.318608    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-249141\" is forbidden: User \"system:node:pause-249141\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" podUID="bbb4bfc784ac6e8df37d02e51f111018" pod="kube-system/kube-controller-manager-pause-249141"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.339006    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-nlvxp\" is forbidden: User \"system:node:pause-249141\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" podUID="3ccbdf76-9f38-49c5-b877-2bd169aabd0f" pod="kube-system/kube-proxy-nlvxp"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.363659    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-5s7pz\" is forbidden: User \"system:node:pause-249141\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" podUID="99856c46-d02d-48bc-95ee-225a3ead3606" pod="kube-system/kindnet-5s7pz"
	Dec 12 01:35:26 pause-249141 kubelet[1318]: E1212 01:35:26.376245    1318 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-5jwqj\" is forbidden: User \"system:node:pause-249141\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-249141' and this object" podUID="1a666390-6ced-45ed-9bce-50a39727a9f8" pod="kube-system/coredns-66bc5c9577-5jwqj"
	Dec 12 01:35:37 pause-249141 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 01:35:37 pause-249141 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 01:35:37 pause-249141 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
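Note on the kube-proxy logs captured above: the E-level line "Kube-proxy configuration may be incomplete or incorrect" is advisory rather than fatal; both kube-proxy instances proceed to sync all their caches. A minimal remediation sketch, assuming the kubeadm-style kube-proxy ConfigMap that the kubelet log above is shown watching (the exact minikube workflow for applying it is not part of this run):

	# Hypothetical, not part of the test run: scope NodePort listeners to the
	# node's primary IP, per the hint in the log
	# ("Consider using `--nodeport-addresses primary`").
	kubectl --context pause-249141 -n kube-system get configmap kube-proxy -o yaml
	# In the embedded KubeProxyConfiguration, set:
	#   nodePortAddresses: ["primary"]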
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-249141 -n pause-249141
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-249141 -n pause-249141: exit status 2 (655.350155ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-249141 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.33s)
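Note on the kube-scheduler logs above: the bursts of "Failed to watch ... is forbidden" errors in both scheduler instances occur while the scheduler starts before the restarting apiserver is serving its RBAC bindings; each burst ends once "Caches are synced" is logged, so they do not by themselves explain the pause failure. A hedged post-mortem spot-check, assuming the apiserver is reachable again (it was refusing connections at the time of capture):

	# Hypothetical check, not part of the test run: confirm the scheduler
	# identity can list pods once RBAC is being served, via impersonation.
	kubectl --context pause-249141 auth can-i list pods --as=system:kube-scheduler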

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (7200.128s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
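The warnings that follow come from a single poll loop: each is an HTTP GET against the pod-list endpoint shown in the message, and each fails because the apiserver at 192.168.76.2:8443 is refusing connections. A rough manual equivalent of that poll (the no-preload profile's context name does not appear in this excerpt, so it is omitted):

	# Hypothetical manual equivalent of the test's poll:
	kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

The interleaved cert_rotation.go errors are separate noise: client-go's TLS transport cache is trying to reload client certificates for other profiles (auto-408737, kindnet-408737, calico-408737) whose .minikube/profiles files are likely gone because those profiles were already torn down; they do not affect this test's result.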
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... 2 identical warnings omitted ...]
E1212 02:09:09.635451  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/auto-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... 24 identical warnings omitted ...]
E1212 02:09:35.373289  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/kindnet-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... 14 identical warnings omitted ...]
E1212 02:09:50.365535  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 02:09:50.372135  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 02:09:50.383570  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 02:09:50.405040  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 02:09:50.446413  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 02:09:50.527908  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1212 02:09:50.689559  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 02:09:51.011189  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1212 02:09:51.653396  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1212 02:09:52.935735  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... 7 identical warnings omitted ...]
E1212 02:10:00.621213  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... 9 identical warnings omitted ...]
E1212 02:10:10.864353  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... 19 identical warnings omitted ...]
E1212 02:10:31.345834  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/calico-408737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated 12 more times]
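
Note on the repeated warnings: the test helper is polling the kubernetes-dashboard namespace while the apiserver at 192.168.76.2:8443 is unreachable (the profile had just been stopped in this test), so every list attempt fails with "connection refused". For reference, a minimal client-go sketch that issues the same labelled pod list; the kubeconfig path and error handling are illustrative assumptions, not the helper's actual code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the integration tests use per-profile configs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same query as the warning above: list dashboard pods by label selector.
	// With the apiserver down, this returns "dial tcp ...: connect: connection refused".
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		fmt.Println("pod list failed:", err)
		return
	}
	fmt.Println("found", len(pods.Items), "dashboard pods")
}
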
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (34m56s)
		TestNetworkPlugins/group/bridge (28s)
		TestNetworkPlugins/group/bridge/Start (28s)
		TestStartStop (36m57s)
		TestStartStop/group/no-preload (27m32s)
		TestStartStop/group/no-preload/serial (27m32s)
		TestStartStop/group/no-preload/serial/AddonExistsAfterStop (1m37s)

goroutine 6278 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38
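
Goroutine 6278 above is the testing package's watchdog: startAlarm arms a timer for the binary's -timeout (2h0m0s on this job) and panics when it fires, which is what produced this whole dump. A shape-of sketch of that mechanism using time.AfterFunc (not the testing package's actual code, and shortened so it runs in seconds):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Stand-in for `go test -timeout 2h0m0s`, shortened for the sketch.
	timeout := 2 * time.Second

	// Approximates testing.(*M).startAlarm: a timer whose callback panics
	// (dumping all goroutine stacks) if the binary outlives its deadline.
	alarm := time.AfterFunc(timeout, func() {
		panic(fmt.Sprintf("test timed out after %v", timeout))
	})
	defer alarm.Stop()

	time.Sleep(3 * time.Second) // simulated tests running past the deadline
}
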

goroutine 1 [chan receive, 30 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40004c48c0, 0x4000857bb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x4000692018, {0x534c680, 0x2c, 0x2c}, {0x4000857d08?, 0x125774?, 0x5375080?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x400075e460)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x400075e460)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 185 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f491a0, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 177
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3722 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3721
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5420 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000821c0}, 0x400009e740, 0x400009e788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000821c0}, 0x2c?, 0x400009e740, 0x400009e788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000821c0?}, 0x0?, 0x400009e750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x40015bfb00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5416
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4793 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4001568380?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4792
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3883 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000821c0}, 0x40013dd740, 0x400087af88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000821c0}, 0x57?, 0x40013dd740, 0x40013dd788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000821c0?}, 0x0?, 0x40013dd750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x40014ec600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3851
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3850 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40014ec600?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3878
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5123 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5122
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 837 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000821c0}, 0x40014c8f40, 0x40014c8f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000821c0}, 0xc0?, 0x40014c8f40, 0x40014c8f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000821c0?}, 0x3164366661623031?, 0x6238653132363937?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000457980?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 831
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4464 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x400023c780?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4460
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3185 [chan receive, 34 minutes]:
testing.(*T).Run(0x40018648c0, {0x296d71f?, 0xd0a5e1c04cd?}, 0x4001556fc0)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x40018648c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x40018648c0, 0x339baf0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 184 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40013d3880?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 177
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 168 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 167
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 167 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000821c0}, 0x400009e740, 0x4001332f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000821c0}, 0x98?, 0x400009e740, 0x400009e788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000821c0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000456480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 185
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 166 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x400083aad0, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400083aac0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f491a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001509dc0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000821c0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000821c0}, 0x4000878f38, {0x369e520, 0x40015ea360}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e520?, 0x40015ea360?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40003288e0, 0x3b9aca00, 0x0, 0x1, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 185
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1054 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x4001a13b00, 0x40020b9110)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 763
	/usr/local/go/src/os/exec/exec.go:775 +0x678
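
Goroutine 1054 (like 1886, 2018, 995, 1067 and 1915 further down) is os/exec's internal watchCtx helper, parked on a channel send for 110 minutes. One plausible reading is that a command started under a context was abandoned before its Wait ran; Wait is what receives from the channel watchCtx sends on. The usual pattern that lets watchCtx exit, with an illustrative command and timeout:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Illustrative command and timeout; the tests run minikube/kubectl here.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "1")
	// Run = Start + Wait. Wait drains the result channel that watchCtx
	// sends on; skipping it is how these goroutines pile up in dumps.
	if err := cmd.Run(); err != nil {
		fmt.Println("command failed:", err)
	}
}
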

goroutine 830 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4000456c00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 829
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5762 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4001615880?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5724
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1886 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x400023c780, 0x4001548690)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1885
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 2018 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x40014ec780, 0x400192cee0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1449
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1171 [select, 110 minutes]:
net/http.(*persistConn).writeLoop(0x4001618b40)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1152
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 831 [chan receive, 112 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400214b080, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 829
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1518 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x400188ca90, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400188ca80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400181e420)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400022d0a0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000821c0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000821c0}, 0x4001537f38, {0x369e520, 0x4004eb5e00}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x4004eb5e00?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400070dcb0, 0x3b9aca00, 0x0, 0x1, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1504
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5105 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40013d2fc0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5088
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3851 [chan receive, 30 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400161cc00, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3878
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5421 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5420
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1271 [IO wait, 110 minutes]:
internal/poll.runtime_pollWait(0xffff466d1800, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001881e80?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4001881e80)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4001881e80)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x400215f800)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x400215f800)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40004caf00, {0x36d4000, 0x400215f800})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40004caf00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1269
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104
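
Goroutine 1271 (and 643 below) is the functional test's startHTTPProxy helper: an ordinary net/http server still parked in Accept long after its test finished, which is benign by itself. Roughly, assuming a trivial handler rather than minikube's actual proxy logic:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hypothetical address and handler; startHTTPProxy wires a real proxy.
	srv := &http.Server{
		Addr: "127.0.0.1:18080",
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "proxied:", r.URL)
		}),
	}
	// The helper runs this in a goroutine; Serve blocks in Accept,
	// exactly where the dump shows goroutine 1271 parked.
	go srv.ListenAndServe()

	time.Sleep(2 * time.Second) // stand-in for the test body
	srv.Close()                 // the timed-out binary never reached a shutdown
}
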

goroutine 1170 [select, 110 minutes]:
net/http.(*persistConn).readLoop(0x4001618b40)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1152
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 3738 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40014ec480?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3734
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4801 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0x40015d7750, 0xf)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40015d7740)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f48900)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001741500?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000821c0?}, 0x40000a0ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000821c0}, 0x40013a8f38, {0x369e520, 0x4001643f20}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x4001643f20?}, 0x30?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004e78d20, 0x3b9aca00, 0x0, 0x1, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4794
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 995 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x4000456300, 0x400192d420)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 994
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5106 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400161d5c0, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5088
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3557 [chan receive]:
testing.(*T).Run(0x4001819dc0, {0x296d724?, 0x368adf0?}, 0x400167ee40)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001819dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x4f4
testing.tRunner(0x4001819dc0, 0x4001880b80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3494
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3721 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000821c0}, 0x4001534f40, 0x4001534f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000821c0}, 0xf0?, 0x4001534f40, 0x4001534f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000821c0?}, 0x4001af0300?, 0x4004ea9b80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400023d200?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3739
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 643 [IO wait, 114 minutes]:
internal/poll.runtime_pollWait(0xffff466d1400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400044c800?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400044c800)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400044c800)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40007c5180)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40007c5180)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000104a00, {0x36d4000, 0x40007c5180})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000104a00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 641
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 6014 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40013d3dc0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6013
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5763 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001ac0ea0, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5724
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 836 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40002bd990, 0x2c)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40002bd980)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400214b080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400165a270?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000821c0?}, 0x40013deea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000821c0}, 0x400084ff38, {0x369e520, 0x40016521e0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40013defa8?, {0x369e520?, 0x40016521e0?}, 0x20?, 0x6d20303639313934?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400147a0d0, 0x3b9aca00, 0x0, 0x1, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 831
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 838 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 837
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1067 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x4001af1680, 0x4001509340)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1066
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5416 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4000784180, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5411
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5122 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000821c0}, 0x40015def40, 0x40015def88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000821c0}, 0x65?, 0x40015def40, 0x40015def88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000821c0?}, 0x0?, 0x40015def50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x40013d2fc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5106
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6218 [syscall]:
syscall.Syscall6(0x5f, 0x3, 0xf, 0x40014cac38, 0x4, 0x4000724090, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x40014cad98?, 0x1929a0?, 0xffffdf3db19e?, 0x0?, 0x4001ac80c0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x400215e700)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x40014cad68?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4000456300)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4000456300)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x4001864000, 0x4000456300)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:104 +0x154
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0x4001864000)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x44
testing.tRunner(0x4001864000, 0x400167ee40)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3557
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4306 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000821c0}, 0x4001532f40, 0x4001532f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000821c0}, 0x58?, 0x4001532f40, 0x4001532f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000821c0?}, 0x4000327180?, 0x4000630280?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400023cc00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4293
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4480 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4479
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4293 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400214b0e0, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4291
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 6015 [chan receive, 1 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001ac0120, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6013
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1504 [chan receive, 82 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400181e420, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1502
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3494 [chan receive, 10 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001819340, 0x4001556fc0)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3185
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1503 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40014ec480?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1502
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4479 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000821c0}, 0x40013d9f40, 0x40013d9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000821c0}, 0x28?, 0x40013d9f40, 0x40013d9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000821c0?}, 0x40013d6a80?, 0x40003eadc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001490c00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4481
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1520 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1519
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1519 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000821c0}, 0x4001464740, 0x4001333f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000821c0}, 0x18?, 0x4001464740, 0x4001464788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000821c0?}, 0x161f90?, 0x40014c76c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400023cf00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1504
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4802 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000821c0}, 0x40013d8f40, 0x40013d8f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000821c0}, 0xe9?, 0x40013d8f40, 0x40013d8f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000821c0?}, 0x0?, 0x40013d8f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x4001568380?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4794
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3435 [chan receive, 10 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001818380, 0x339bd20)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3230
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3884 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3883
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4292 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40013d2a80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4291
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4481 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40013fde00, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4460
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3739 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400189ed20, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3734
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1915 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x40014ecd80, 0x4001699c70)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1914
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5952 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e65a8, 0x400060fef0}, {0x36d4660, 0x400149d7e0}, 0x1, 0x0, 0x4001343b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e6618?, 0x40003265b0?}, 0x3b9aca00, 0x4001343d28?, 0x1, 0x4001343b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e6618, 0x40003265b0}, 0x4001818fc0, {0x400067ab70, 0x11}, {0x29941e1, 0x14}, {0x29ac150, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:380 +0x22c
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36e6618, 0x40003265b0}, 0x4001818fc0, {0x400067ab70, 0x11}, {0x29786f9?, 0x245bb75e00161e84?}, {0x693b7943?, 0x40000d4f58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:285 +0xd4
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x4001818fc0?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x4001818fc0, 0x4001734000)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4074
	/usr/local/go/src/testing/testing.go:1997 +0x364
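
Goroutine 5952 is the check that was actually running when the alarm fired: AddonExistsAfterStop's PodWait polls for a healthy kubernetes-dashboard pod via wait.PollUntilContextTimeout, and every probe was failing with the connection-refused warnings shown earlier. The wait primitive itself behaves like this (the condition body is a hypothetical stand-in):

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	start := time.Now()

	// Same primitive PodWait uses: probe every interval until the condition
	// returns true, returns an error, or the timeout/context expires.
	err := wait.PollUntilContextTimeout(context.Background(),
		time.Second,   // interval between probes
		5*time.Second, // overall deadline (PodWait passes minutes here)
		true,          // immediate: run the first probe right away
		func(ctx context.Context) (bool, error) {
			// Hypothetical stand-in for "is the dashboard pod Running?".
			return time.Since(start) > 3*time.Second, nil
		})
	fmt.Println("poll finished:", err)
}
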

goroutine 4307 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4306
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3720 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x400215e890, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400215e880)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400189ed20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400165f710?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000821c0?}, 0x40013d96a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000821c0}, 0x4001535f38, {0x369e520, 0x40015f8060}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40015f8060?}, 0xe0?, 0x161f90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004e78030, 0x3b9aca00, 0x0, 0x1, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3739
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3439 [chan receive, 27 minutes]:
testing.(*T).Run(0x4001818a80, {0x296eb91?, 0x0?}, 0x400015b600)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x4001818a80)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x4001818a80, 0x400215e340)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3435
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4794 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f48900, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4792
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3230 [chan receive, 38 minutes]:
testing.(*T).Run(0x4001865500, {0x296d71f?, 0x40000d6f58?}, 0x339bd20)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x4001865500)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x4001865500, 0x339bb38)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5419 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x400188cbd0, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400188cbc0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4000784180)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001741f10?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000821c0?}, 0x40012ffea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000821c0}, 0x40000d3f38, {0x369e520, 0x4004f3a0c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x4004f3a0c0?}, 0x30?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004edc230, 0x3b9aca00, 0x0, 0x1, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5416
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3882 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x400215e7d0, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400215e7c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400161cc00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40016985b0?, 0x36e65a8?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000821c0?}, 0x40014646f8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000821c0}, 0x40013aff38, {0x369e520, 0x40018953e0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e520?, 0x40018953e0?}, 0x0?, 0x40014ec180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004e78fe0, 0x3b9aca00, 0x0, 0x1, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3851
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4478 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0x40007c58d0, 0x10)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40007c58c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40013fde00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400192d340?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000821c0?}, 0x40015e06b8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000821c0}, 0x4001334f38, {0x369e520, 0x400166c630}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x400166c630?}, 0x1?, 0x36e6618?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400154cd40, 0x3b9aca00, 0x0, 0x1, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4481
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4074 [chan receive, 1 minutes]:
testing.(*T).Run(0x40015681c0, {0x2994231?, 0x40000006ee?}, 0x4001734000)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40015681c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40015681c0, 0x400015b600)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3439
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4305 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0x400215e690, 0x2)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400215e680)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400214b0e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40015de718?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000821c0?}, 0x40015de6a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000821c0}, 0x4000853f38, {0x369e520, 0x40016540c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40016540c0?}, 0x50?, 0x4000456480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004edc090, 0x3b9aca00, 0x0, 0x1, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4293
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5121 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40015d6190, 0xe)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40015d6180)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400161d5c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40015e2e88?, 0x2a0ac?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000821c0?}, 0xffff8d15d108?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000821c0}, 0x4000874f38, {0x369e520, 0x4001624fc0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e520?, 0x4001624fc0?}, 0xb0?, 0x400023ca80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004edc820, 0x3b9aca00, 0x0, 0x1, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5106
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4803 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4802
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5415 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40015bfb00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5411
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5766 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x400215e4d0, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400215e4c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001ac0ea0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400023f650?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000821c0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000821c0}, 0x400084ef38, {0x369e520, 0x4001a66600}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x4001a66600?}, 0xc0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400283c7c0, 0x3b9aca00, 0x0, 0x1, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5763
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5767 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000821c0}, 0x40012fff40, 0x40012fff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000821c0}, 0x0?, 0x40012fff40, 0x40012fff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000821c0?}, 0x400023d680?, 0x4004ea8640?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40000714a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5763
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5768 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5767
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 6033 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0x400215f390, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400215f380)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001ac0120)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001666a80?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000821c0?}, 0x40014626a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000821c0}, 0x4000850f38, {0x369e520, 0x40007206c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x40007206c0?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400283df10, 0x3b9aca00, 0x0, 0x1, 0x40000821c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6015
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 6034 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000821c0}, 0x40012fdf40, 0x40012fdf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000821c0}, 0xd2?, 0x40012fdf40, 0x40012fdf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000821c0?}, 0x0?, 0x40012fdf50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x40013d3dc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6015
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6035 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6034
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 6221 [select]:
os/exec.(*Cmd).watchCtx(0x4000456300, 0x4001cfc4d0)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 6218
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 6219 [IO wait]:
internal/poll.runtime_pollWait(0xffff466d1000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4000731b60?, 0x40017f3278?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4000731b60, {0x40017f3278, 0x588, 0x588})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40007120b8, {0x40017f3278?, 0x4001463568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x400167f1d0, {0x369c8e8, 0x40004941b0})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x400167f1d0}, {0x369c8e8, 0x40004941b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40007120b8?, {0x369cae0, 0x400167f1d0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40007120b8, {0x369cae0, 0x400167f1d0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x400167f1d0}, {0x369c968, 0x40007120b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4001864000?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6218
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 6220 [IO wait]:
internal/poll.runtime_pollWait(0xffff462eb800, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001ac0180?, 0x400151c89d?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001ac0180, {0x400151c89d, 0xd763, 0xd763})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40007120e8, {0x400151c89d?, 0x40012fcd68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x400167f200, {0x369c8e8, 0x40004941c8})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x400167f200}, {0x369c8e8, 0x40004941c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40007120e8?, {0x369cae0, 0x400167f200})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40007120e8, {0x369cae0, 0x400167f200})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x400167f200}, {0x369c968, 0x40007120e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4001865880?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6218
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4


Test pass (240/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.7
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 4.98
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 6.64
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 146.94
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/serial/GCPAuth/FakeCredentials 9.89
57 TestAddons/StoppedEnableDisable 12.67
58 TestCertOptions 36.55
59 TestCertExpiration 246.88
61 TestForceSystemdFlag 39.43
62 TestForceSystemdEnv 41.64
67 TestErrorSpam/setup 33.3
68 TestErrorSpam/start 0.79
69 TestErrorSpam/status 1.05
70 TestErrorSpam/pause 6.69
71 TestErrorSpam/unpause 5.58
72 TestErrorSpam/stop 1.53
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 78.38
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 25.24
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.59
84 TestFunctional/serial/CacheCmd/cache/add_local 1.25
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
92 TestFunctional/serial/ExtraConfig 31.23
93 TestFunctional/serial/ComponentHealth 0.11
94 TestFunctional/serial/LogsCmd 1.46
95 TestFunctional/serial/LogsFileCmd 1.47
96 TestFunctional/serial/InvalidService 4.41
98 TestFunctional/parallel/ConfigCmd 0.48
99 TestFunctional/parallel/DashboardCmd 13.14
100 TestFunctional/parallel/DryRun 0.6
101 TestFunctional/parallel/InternationalLanguage 0.26
102 TestFunctional/parallel/StatusCmd 1.14
106 TestFunctional/parallel/ServiceCmdConnect 8.59
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 21.94
110 TestFunctional/parallel/SSHCmd 0.71
111 TestFunctional/parallel/CpCmd 2.43
113 TestFunctional/parallel/FileSync 0.37
114 TestFunctional/parallel/CertSync 2.19
118 TestFunctional/parallel/NodeLabels 0.11
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.81
122 TestFunctional/parallel/License 0.31
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 8.23
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
136 TestFunctional/parallel/ProfileCmd/profile_list 0.41
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
138 TestFunctional/parallel/MountCmd/any-port 7.42
139 TestFunctional/parallel/ServiceCmd/List 0.51
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
142 TestFunctional/parallel/ServiceCmd/Format 0.4
143 TestFunctional/parallel/ServiceCmd/URL 0.39
144 TestFunctional/parallel/MountCmd/specific-port 2.2
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.4
146 TestFunctional/parallel/Version/short 0.07
147 TestFunctional/parallel/Version/components 0.65
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
152 TestFunctional/parallel/ImageCommands/ImageBuild 4.06
153 TestFunctional/parallel/ImageCommands/Setup 0.67
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.4
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.25
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
159 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
160 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
161 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.01
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.53
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.13
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.05
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.81
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.96
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.98
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.43
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.44
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.2
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.72
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.26
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.7
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.56
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.29
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.4
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.41
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.21
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.27
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.52
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.24
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.22
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.25
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.77
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.27
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.2
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.84
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.07
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.37
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.56
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.76
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.43
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.16
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 185.66
265 TestMultiControlPlane/serial/DeployApp 7.89
266 TestMultiControlPlane/serial/PingHostFromPods 1.49
267 TestMultiControlPlane/serial/AddWorkerNode 60.97
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
270 TestMultiControlPlane/serial/CopyFile 20.37
271 TestMultiControlPlane/serial/StopSecondaryNode 12.87
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
273 TestMultiControlPlane/serial/RestartSecondaryNode 20.42
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.36
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 126.59
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.57
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
278 TestMultiControlPlane/serial/StopCluster 36.05
279 TestMultiControlPlane/serial/RestartCluster 71.62
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
281 TestMultiControlPlane/serial/AddSecondaryNode 81.78
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
287 TestJSONOutput/start/Command 81.2
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.85
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 38.28
313 TestKicCustomNetwork/use_default_bridge_network 35.55
314 TestKicExistingNetwork 36.67
315 TestKicCustomSubnet 34.46
316 TestKicStaticIP 37.59
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 70.76
321 TestMountStart/serial/StartWithMountFirst 6.58
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 6.6
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.71
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 7.83
329 TestMountStart/serial/VerifyMountPostStop 0.29
332 TestMultiNode/serial/FreshStart2Nodes 138.65
333 TestMultiNode/serial/DeployApp2Nodes 4.97
334 TestMultiNode/serial/PingHostFrom2Pods 0.99
335 TestMultiNode/serial/AddNode 57.06
336 TestMultiNode/serial/MultiNodeLabels 0.08
337 TestMultiNode/serial/ProfileList 0.71
338 TestMultiNode/serial/CopyFile 10.7
339 TestMultiNode/serial/StopNode 2.43
340 TestMultiNode/serial/StartAfterStop 8.71
341 TestMultiNode/serial/RestartKeepsNodes 76.38
342 TestMultiNode/serial/DeleteNode 5.67
343 TestMultiNode/serial/StopMultiNode 24.1
344 TestMultiNode/serial/RestartMultiNode 51.49
345 TestMultiNode/serial/ValidateNameConflict 35.2
350 TestPreload 118.11
352 TestScheduledStopUnix 110.23
355 TestInsufficientStorage 10.7
356 TestRunningBinaryUpgrade 303.27
359 TestMissingContainerUpgrade 108.71
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
362 TestNoKubernetes/serial/StartWithK8s 48.61
363 TestNoKubernetes/serial/StartWithStopK8s 114.33
364 TestNoKubernetes/serial/Start 8.35
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
367 TestNoKubernetes/serial/ProfileList 1.04
368 TestNoKubernetes/serial/Stop 1.31
369 TestNoKubernetes/serial/StartNoArgs 7.01
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
371 TestStoppedBinaryUpgrade/Setup 1.27
372 TestStoppedBinaryUpgrade/Upgrade 303.34
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.78
382 TestPause/serial/Start 81.08
383 TestPause/serial/SecondStartNoReconfiguration 28.87
TestDownloadOnly/v1.28.0/json-events (6.7s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-813428 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-813428 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.701681633s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.70s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1212 00:10:51.234131  490954 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1212 00:10:51.234213  490954 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-813428
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-813428: exit status 85 (89.098401ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-813428 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-813428 │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:10:44
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:10:44.575330  490959 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:10:44.575452  490959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:10:44.575462  490959 out.go:374] Setting ErrFile to fd 2...
	I1212 00:10:44.575467  490959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:10:44.575721  490959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	W1212 00:10:44.575855  490959 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22101-487723/.minikube/config/config.json: open /home/jenkins/minikube-integration/22101-487723/.minikube/config/config.json: no such file or directory
	I1212 00:10:44.576261  490959 out.go:368] Setting JSON to true
	I1212 00:10:44.577089  490959 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10390,"bootTime":1765487855,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:10:44.577157  490959 start.go:143] virtualization:  
	I1212 00:10:44.583886  490959 out.go:99] [download-only-813428] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1212 00:10:44.584067  490959 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 00:10:44.584130  490959 notify.go:221] Checking for updates...
	I1212 00:10:44.587441  490959 out.go:171] MINIKUBE_LOCATION=22101
	I1212 00:10:44.591015  490959 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:10:44.594282  490959 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:10:44.597510  490959 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:10:44.600747  490959 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 00:10:44.606900  490959 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 00:10:44.607183  490959 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:10:44.636520  490959 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:10:44.636678  490959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:10:44.701034  490959 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-12 00:10:44.691514816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:10:44.701133  490959 docker.go:319] overlay module found
	I1212 00:10:44.704254  490959 out.go:99] Using the docker driver based on user configuration
	I1212 00:10:44.704295  490959 start.go:309] selected driver: docker
	I1212 00:10:44.704305  490959 start.go:927] validating driver "docker" against <nil>
	I1212 00:10:44.704407  490959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:10:44.765496  490959 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-12 00:10:44.754609655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:10:44.765674  490959 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 00:10:44.765984  490959 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1212 00:10:44.766144  490959 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 00:10:44.769390  490959 out.go:171] Using Docker driver with root privileges
	I1212 00:10:44.772530  490959 cni.go:84] Creating CNI manager for ""
	I1212 00:10:44.772617  490959 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:10:44.772631  490959 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:10:44.772725  490959 start.go:353] cluster config:
	{Name:download-only-813428 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-813428 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:10:44.775872  490959 out.go:99] Starting "download-only-813428" primary control-plane node in "download-only-813428" cluster
	I1212 00:10:44.775904  490959 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:10:44.778959  490959 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:10:44.779059  490959 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 00:10:44.779149  490959 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:10:44.800970  490959 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:10:44.800994  490959 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1212 00:10:44.801144  490959 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1212 00:10:44.801255  490959 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1212 00:10:44.836880  490959 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:10:44.836922  490959 cache.go:65] Caching tarball of preloaded images
	I1212 00:10:44.837100  490959 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 00:10:44.840575  490959 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1212 00:10:44.840614  490959 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1212 00:10:44.922579  490959 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1212 00:10:44.922763  490959 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-813428 host does not exist
	  To start a cluster, run: "minikube start -p download-only-813428"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-813428
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.2/json-events (4.98s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-539419 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-539419 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.981351817s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (4.98s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1212 00:10:56.655232  490954 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1212 00:10:56.655272  490954 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-539419
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-539419: exit status 85 (86.733467ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-813428 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-813428 │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │ 12 Dec 25 00:10 UTC │
	│ delete  │ -p download-only-813428                                                                                                                                                   │ download-only-813428 │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │ 12 Dec 25 00:10 UTC │
	│ start   │ -o=json --download-only -p download-only-539419 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-539419 │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:10:51
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:10:51.715740  491157 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:10:51.715919  491157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:10:51.715952  491157 out.go:374] Setting ErrFile to fd 2...
	I1212 00:10:51.715972  491157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:10:51.716227  491157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:10:51.716648  491157 out.go:368] Setting JSON to true
	I1212 00:10:51.717482  491157 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10397,"bootTime":1765487855,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:10:51.717582  491157 start.go:143] virtualization:  
	I1212 00:10:51.721059  491157 out.go:99] [download-only-539419] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:10:51.721357  491157 notify.go:221] Checking for updates...
	I1212 00:10:51.724403  491157 out.go:171] MINIKUBE_LOCATION=22101
	I1212 00:10:51.728342  491157 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:10:51.731304  491157 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:10:51.734369  491157 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:10:51.737312  491157 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 00:10:51.742973  491157 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 00:10:51.743258  491157 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:10:51.765619  491157 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:10:51.765742  491157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:10:51.825738  491157 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-12 00:10:51.816708547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:10:51.825847  491157 docker.go:319] overlay module found
	I1212 00:10:51.828675  491157 out.go:99] Using the docker driver based on user configuration
	I1212 00:10:51.828712  491157 start.go:309] selected driver: docker
	I1212 00:10:51.828719  491157 start.go:927] validating driver "docker" against <nil>
	I1212 00:10:51.828833  491157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:10:51.883515  491157 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-12 00:10:51.873788582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:10:51.883671  491157 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 00:10:51.883932  491157 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1212 00:10:51.884091  491157 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 00:10:51.887177  491157 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-539419 host does not exist
	  To start a cluster, run: "minikube start -p download-only-539419"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)
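
LogsDuration passes even though "minikube logs" exits non-zero: the profile was created with --download-only, so no control-plane host was ever provisioned, and the test appears to treat exit status 85 ("host does not exist") as the expected outcome. A minimal reproduction while such a profile exists:

    # Sketch: "logs" against a download-only profile is expected to fail.
    out/minikube-linux-arm64 logs -p download-only-539419
    echo "exit=$?"   # 85 in the run above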

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-539419
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (6.64s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-510166 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-510166 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.64330125s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (6.64s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1212 00:11:03.737173  490954 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1212 00:11:03.737212  490954 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-510166
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-510166: exit status 85 (83.349288ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-813428 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-813428 │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │ 12 Dec 25 00:10 UTC │
	│ delete  │ -p download-only-813428                                                                                                                                                          │ download-only-813428 │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │ 12 Dec 25 00:10 UTC │
	│ start   │ -o=json --download-only -p download-only-539419 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-539419 │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │ 12 Dec 25 00:10 UTC │
	│ delete  │ -p download-only-539419                                                                                                                                                          │ download-only-539419 │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │ 12 Dec 25 00:10 UTC │
	│ start   │ -o=json --download-only -p download-only-510166 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-510166 │ jenkins │ v1.37.0 │ 12 Dec 25 00:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:10:57
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:10:57.135399  491354 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:10:57.135515  491354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:10:57.135526  491354 out.go:374] Setting ErrFile to fd 2...
	I1212 00:10:57.135531  491354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:10:57.135787  491354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:10:57.136199  491354 out.go:368] Setting JSON to true
	I1212 00:10:57.137009  491354 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10403,"bootTime":1765487855,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:10:57.137078  491354 start.go:143] virtualization:  
	I1212 00:10:57.140549  491354 out.go:99] [download-only-510166] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:10:57.140805  491354 notify.go:221] Checking for updates...
	I1212 00:10:57.143640  491354 out.go:171] MINIKUBE_LOCATION=22101
	I1212 00:10:57.147034  491354 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:10:57.150063  491354 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:10:57.153171  491354 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:10:57.156041  491354 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 00:10:57.161833  491354 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 00:10:57.162119  491354 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:10:57.196467  491354 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:10:57.196619  491354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:10:57.259737  491354 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 00:10:57.242326287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:10:57.259881  491354 docker.go:319] overlay module found
	I1212 00:10:57.262828  491354 out.go:99] Using the docker driver based on user configuration
	I1212 00:10:57.262873  491354 start.go:309] selected driver: docker
	I1212 00:10:57.262885  491354 start.go:927] validating driver "docker" against <nil>
	I1212 00:10:57.263001  491354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:10:57.315431  491354 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 00:10:57.305755654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:10:57.315592  491354 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 00:10:57.315881  491354 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1212 00:10:57.316039  491354 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 00:10:57.319167  491354 out.go:171] Using Docker driver with root privileges
	I1212 00:10:57.321980  491354 cni.go:84] Creating CNI manager for ""
	I1212 00:10:57.322056  491354 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 00:10:57.322069  491354 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:10:57.322155  491354 start.go:353] cluster config:
	{Name:download-only-510166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-510166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:10:57.325108  491354 out.go:99] Starting "download-only-510166" primary control-plane node in "download-only-510166" cluster
	I1212 00:10:57.325141  491354 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 00:10:57.328029  491354 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:10:57.328085  491354 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:10:57.328262  491354 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:10:57.346733  491354 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:10:57.346760  491354 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1212 00:10:57.346865  491354 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1212 00:10:57.346886  491354 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1212 00:10:57.346895  491354 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1212 00:10:57.346910  491354 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1212 00:10:57.389226  491354 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 00:10:57.389258  491354 cache.go:65] Caching tarball of preloaded images
	I1212 00:10:57.389442  491354 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 00:10:57.392457  491354 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1212 00:10:57.392488  491354 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1212 00:10:57.491309  491354 preload.go:295] Got checksum from GCS API "e7da2fb676059c00535073e4a61150f1"
	I1212 00:10:57.491359  491354 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e7da2fb676059c00535073e4a61150f1 -> /home/jenkins/minikube-integration/22101-487723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-510166 host does not exist
	  To start a cluster, run: "minikube start -p download-only-510166"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)
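
The Last Start log above shows the v1.35.0-beta.0 preload being fetched from GCS, verified against an md5 obtained from the GCS API. A sketch of the same fetch-and-verify done by hand, with the URL and checksum copied from the log lines above:

    URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4"
    curl -fLO "$URL"
    # Compare against the checksum the GCS API returned: e7da2fb676059c00535073e4a61150f1
    md5sum preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4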

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-510166
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1212 00:11:05.032295  490954 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-824111 --alsologtostderr --binary-mirror http://127.0.0.1:35743 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-824111" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-824111
--- PASS: TestBinaryMirror (0.60s)
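
The binary.go line above shows kubectl being downloaded straight from dl.k8s.io, with its integrity pinned via the checksum=file: reference to the published sha256. The same verification done manually (a sketch, not part of the test itself):

    curl -fLO "https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl"
    curl -fL "https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256" -o kubectl.sha256
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check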

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-199484
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-199484: exit status 85 (73.827244ms)

                                                
                                                
-- stdout --
	* Profile "addons-199484" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-199484"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-199484
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-199484: exit status 85 (77.120329ms)

                                                
                                                
-- stdout --
	* Profile "addons-199484" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-199484"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (146.94s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-199484 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-199484 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m26.94291843s)
--- PASS: TestAddons/Setup (146.94s)
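
With fifteen addons enabled on a single 4096MB profile, the resulting addon state can be inspected after setup; a sketch, valid while the profile is up:

    out/minikube-linux-arm64 -p addons-199484 addons list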

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-199484 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-199484 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.89s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-199484 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-199484 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [bcd6e1dc-6dc8-422c-946c-fcbf92362207] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [bcd6e1dc-6dc8-422c-946c-fcbf92362207] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003868399s
addons_test.go:696: (dbg) Run:  kubectl --context addons-199484 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-199484 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-199484 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-199484 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.89s)
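
A condensed by-hand version of the gcp-auth checks above, with the same context and pod name as in the log: the webhook should have injected both the credential environment variables and the mounted fake-credentials file into the busybox pod.

    kubectl --context addons-199484 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
    kubectl --context addons-199484 exec busybox -- cat /google-app-creds.json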

                                                
                                    
TestAddons/StoppedEnableDisable (12.67s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-199484
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-199484: (12.395138808s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-199484
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-199484
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-199484
--- PASS: TestAddons/StoppedEnableDisable (12.67s)

                                                
                                    
TestCertOptions (36.55s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-221873 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-221873 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.720658905s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-221873 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-221873 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-221873 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-221873" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-221873
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-221873: (2.088516644s)
--- PASS: TestCertOptions (36.55s)
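
The openssl invocation above dumps the entire apiserver certificate; to eyeball just the SANs added by --apiserver-ips and --apiserver-names, a filtered variant (a sketch; it only works while the profile exists, and the test deletes it at the end):

    out/minikube-linux-arm64 -p cert-options-221873 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"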

                                                
                                    
TestCertExpiration (246.88s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-964515 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-964515 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.982171621s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-964515 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-964515 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.053333521s)
helpers_test.go:176: Cleaning up "cert-expiration-964515" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-964515
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-964515: (2.841219977s)
--- PASS: TestCertExpiration (246.88s)
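
The test first starts the cluster with certificates that expire in three minutes, then restarts it with --cert-expiration=8760h (one year), presumably so the restart has to regenerate the expired certificates. The current expiry can be read straight off the node; a sketch, again only valid while the profile exists:

    out/minikube-linux-arm64 -p cert-expiration-964515 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"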

                                                
                                    
TestForceSystemdFlag (39.43s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-272786 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-272786 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.191246952s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-272786 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-272786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-272786
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-272786: (2.863630679s)
--- PASS: TestForceSystemdFlag (39.43s)
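
The assertion here cats the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf written for --force-systemd; the relevant knob in that file is the crio.runtime cgroup_manager setting. A narrower by-hand check (a sketch; with --force-systemd the expected value is presumably cgroup_manager = "systemd"):

    out/minikube-linux-arm64 -p force-systemd-flag-272786 ssh \
      "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"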

                                                
                                    
TestForceSystemdEnv (41.64s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-668101 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-668101 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.798542847s)
helpers_test.go:176: Cleaning up "force-systemd-env-668101" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-668101
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-668101: (2.842766756s)
--- PASS: TestForceSystemdEnv (41.64s)
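
Unlike the flag variant above, this Run line carries no --force-systemd; the systemd cgroup manager is presumably requested through minikube's MINIKUBE_FORCE_SYSTEMD environment variable, which the dbg output does not echo. Roughly:

    # Sketch (assumption: the harness sets the env var rather than the flag):
    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-668101 \
      --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=crio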

                                                
                                    
TestErrorSpam/setup (33.3s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-256054 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-256054 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-256054 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-256054 --driver=docker  --container-runtime=crio: (33.301806641s)
--- PASS: TestErrorSpam/setup (33.30s)

                                                
                                    
TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

                                                
                                    
TestErrorSpam/status (1.05s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 status
--- PASS: TestErrorSpam/status (1.05s)

                                                
                                    
TestErrorSpam/pause (6.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 pause: exit status 80 (1.908235341s)

                                                
                                                
-- stdout --
	* Pausing node nospam-256054 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:17:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 pause: exit status 80 (2.371207611s)

                                                
                                                
-- stdout --
	* Pausing node nospam-256054 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:17:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 pause: exit status 80 (2.404018592s)

                                                
                                                
-- stdout --
	* Pausing node nospam-256054 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:17:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.69s)
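
All three pause attempts fail identically: minikube shells into the node and runs "sudo runc list -f json", and runc aborts because its state directory /run/runc does not exist on this CRI-O node. TestErrorSpam still passes because it checks for unexpected log spam rather than for pause to succeed. The underlying command can be reproduced by hand while the cluster is up:

    out/minikube-linux-arm64 ssh -p nospam-256054 -- "sudo runc list -f json"
    # fails with: open /run/runc: no such file or directory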

                                                
                                    
TestErrorSpam/unpause (5.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 unpause: exit status 80 (2.112896269s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-256054 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:17:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 unpause: exit status 80 (1.829030226s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-256054 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:17:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 unpause: exit status 80 (1.632832422s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-256054 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T00:17:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.58s)
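Note on the failure mode above: `minikube unpause` decides what to unpause by listing containers through the low-level runtime, and both attempts die on the same probe, `sudo runc list -f json`, because /run/runc does not exist on this crio node. A minimal Go sketch of that probe, assuming `minikube` is on PATH and reusing the `nospam-256054` profile name from this log:

// probe_runc.go - reproduce the "list paused" probe behind GUEST_UNPAUSE.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same command the error message quotes: sudo runc list -f json.
	out, err := exec.Command("minikube", "-p", "nospam-256054",
		"ssh", "--", "sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// On this run the node answered: open /run/runc: no such file or directory.
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("containers: %s\n", out)
}

On a crio runtime the runc state directory may live under a crio-specific root rather than /run/runc, which would explain the missing directory.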

                                                
                                    
x
+
TestErrorSpam/stop (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 stop: (1.323266756s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-256054 --log_dir /tmp/nospam-256054 stop
--- PASS: TestErrorSpam/stop (1.53s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (78.38s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-921447 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1212 00:18:33.594735  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:33.601931  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:33.614183  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:33.635705  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:33.677219  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:33.758708  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:33.920339  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:34.242068  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:34.884123  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:36.165568  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:38.727026  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:43.849160  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:54.090562  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-921447 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m18.382234217s)
--- PASS: TestFunctional/serial/StartWithProxy (78.38s)
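The repeated cert_rotation errors above are noise from a stale kubeconfig entry: the kubeconfig still points at the client certificate of the already-deleted addons-199484 profile, so client-go retries with backoff and logs each miss while functional-921447 starts. A dependency-free sketch that scans a kubeconfig for client-certificate paths that no longer exist; the $HOME/.kube/config path is a stand-in, since this job's kubeconfig lives under the Jenkins workspace:

// stale_certs.go - flag kubeconfig client-certificate files that are gone.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	path := os.ExpandEnv("$HOME/.kube/config") // stand-in path
	f, err := os.Open(path)
	if err != nil {
		fmt.Println("open kubeconfig:", err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "client-certificate:") {
			continue
		}
		cert := strings.TrimSpace(strings.TrimPrefix(line, "client-certificate:"))
		if _, statErr := os.Stat(cert); statErr != nil {
			// e.g. the deleted addons-199484 profile's client.crt in this log
			fmt.Println("stale client cert:", cert)
		}
	}
}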

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (25.24s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1212 00:19:04.157337  490954 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-921447 --alsologtostderr -v=8
E1212 00:19:14.572222  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-921447 --alsologtostderr -v=8: (25.237857122s)
functional_test.go:678: soft start took 25.23837716s for "functional-921447" cluster.
I1212 00:19:29.395481  490954 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (25.24s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-921447 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-921447 cache add registry.k8s.io/pause:3.1: (1.237152593s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-921447 cache add registry.k8s.io/pause:3.3: (1.200687795s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-921447 cache add registry.k8s.io/pause:latest: (1.148572888s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-921447 /tmp/TestFunctionalserialCacheCmdcacheadd_local2055869947/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 cache add minikube-local-cache-test:functional-921447
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 cache delete minikube-local-cache-test:functional-921447
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-921447
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-921447 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.393941ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)
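The reload sequence above is: remove the image from the node with `crictl rmi`, confirm `crictl inspecti` now exits non-zero, run `minikube cache reload`, then confirm `inspecti` succeeds again. A sketch of the same round trip, assuming the binary path and profile name from this log:

// cache_reload_check.go - drive the rmi -> inspecti -> reload -> inspecti cycle.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a minikube subcommand against the profile and reports success.
func run(args ...string) bool {
	cmd := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-921447"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ %v\n%s", cmd.Args, out)
	return err == nil
}

func main() {
	const img = "registry.k8s.io/pause:latest"
	run("ssh", "sudo", "crictl", "rmi", img)
	if run("ssh", "sudo", "crictl", "inspecti", img) {
		fmt.Println("expected inspecti to fail after rmi") // the log shows exit status 1 here
	}
	run("cache", "reload")
	if !run("ssh", "sudo", "crictl", "inspecti", img) {
		fmt.Println("image still missing after cache reload")
	}
}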

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 kubectl -- --context functional-921447 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-921447 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (31.23s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-921447 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 00:19:55.534853  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-921447 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.227691843s)
functional_test.go:776: restart took 31.227791426s for "functional-921447" cluster.
I1212 00:20:08.309409  490954 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (31.23s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-921447 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
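ComponentHealth fetches the control-plane pods as JSON and checks each pod's phase plus its Ready condition, which is what the phase/status pairs above reflect. A minimal sketch of that check, assuming kubectl and the context from this log:

// component_health.go - check control-plane pod phase and Ready condition.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct{ Name string } `json:"metadata"`
		Status   struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-921447",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}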

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-921447 logs: (1.46084092s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 logs --file /tmp/TestFunctionalserialLogsFileCmd2119223255/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-921447 logs --file /tmp/TestFunctionalserialLogsFileCmd2119223255/001/logs.txt: (1.472545669s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.41s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-921447 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-921447
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-921447: exit status 115 (416.207628ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31097 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-921447 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.41s)
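InvalidService expects exit status 115 (SVC_UNREACHABLE) because the service object exists but no running pod backs it. One way to verify that precondition directly is to inspect the service's endpoints before trying the NodePort URL; a sketch, assuming the context from this log and the invalid-svc name from testdata:

// svc_endpoints.go - a service with no ready pods has no endpoint addresses.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-921447",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("endpoints lookup failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		// Matches the failure above: no running pod for service invalid-svc found.
		fmt.Println("invalid-svc has no ready endpoints")
	} else {
		fmt.Println("ready endpoint IPs:", string(out))
	}
}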

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-921447 config get cpus: exit status 14 (86.588696ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-921447 config get cpus: exit status 14 (62.238494ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
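The config subtest relies on `config get` exiting with status 14 when the key is unset, as both non-zero exits above show. A sketch that distinguishes that exit code from other failures, assuming the binary path and profile from this log:

// config_get.go - exit status 14 from "config get" means the key is unset.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-921447",
		"config", "get", "cpus")
	out, err := cmd.Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus = %s", out)
	case errors.As(err, &ee) && ee.ExitCode() == 14:
		fmt.Println("cpus is not set") // "specified key could not be found in config"
	default:
		fmt.Println("unexpected failure:", err)
	}
}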

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (13.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-921447 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-921447 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 516318: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.14s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-921447 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-921447 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (286.727347ms)

                                                
                                                
-- stdout --
	* [functional-921447] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:20:48.029587  515682 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:20:48.029786  515682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:20:48.029798  515682 out.go:374] Setting ErrFile to fd 2...
	I1212 00:20:48.029804  515682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:20:48.030110  515682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:20:48.030520  515682 out.go:368] Setting JSON to false
	I1212 00:20:48.031538  515682 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10993,"bootTime":1765487855,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:20:48.031614  515682 start.go:143] virtualization:  
	I1212 00:20:48.036730  515682 out.go:179] * [functional-921447] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:20:48.039811  515682 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:20:48.040013  515682 notify.go:221] Checking for updates...
	I1212 00:20:48.045845  515682 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:20:48.048800  515682 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:20:48.051773  515682 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:20:48.054783  515682 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:20:48.057574  515682 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:20:48.060859  515682 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:20:48.061690  515682 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:20:48.118739  515682 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:20:48.118863  515682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:20:48.219053  515682 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 00:20:48.204242993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:20:48.219174  515682 docker.go:319] overlay module found
	I1212 00:20:48.222263  515682 out.go:179] * Using the docker driver based on existing profile
	I1212 00:20:48.225051  515682 start.go:309] selected driver: docker
	I1212 00:20:48.225072  515682 start.go:927] validating driver "docker" against &{Name:functional-921447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-921447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:20:48.225263  515682 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:20:48.228721  515682 out.go:203] 
	W1212 00:20:48.231467  515682 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 00:20:48.234438  515682 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-921447 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.60s)
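The dry run fails validation because the requested 250MB is below minikube's usable minimum; the error text above quotes 1800MB. A toy version of that guard, with the threshold taken from the RSRC_INSUFFICIENT_REQ_MEMORY message:

// mem_check.go - reject memory requests below a usable minimum, as the dry run does.
package main

import "fmt"

const minUsableMB = 1800 // quoted by RSRC_INSUFFICIENT_REQ_MEMORY in the log

func validateMemoryMB(requested int) error {
	if requested < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requested, minUsableMB)
	}
	return nil
}

func main() {
	for _, req := range []int{250, 4096} {
		if err := validateMemoryMB(req); err != nil {
			fmt.Println("X", err)
			continue
		}
		fmt.Printf("%dMB accepted\n", req)
	}
}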

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-921447 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-921447 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (262.736811ms)

                                                
                                                
-- stdout --
	* [functional-921447] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:20:47.771520  515600 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:20:47.773973  515600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:20:47.773992  515600 out.go:374] Setting ErrFile to fd 2...
	I1212 00:20:47.774000  515600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:20:47.774417  515600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:20:47.775893  515600 out.go:368] Setting JSON to false
	I1212 00:20:47.779122  515600 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10993,"bootTime":1765487855,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:20:47.779499  515600 start.go:143] virtualization:  
	I1212 00:20:47.783045  515600 out.go:179] * [functional-921447] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1212 00:20:47.787042  515600 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:20:47.787090  515600 notify.go:221] Checking for updates...
	I1212 00:20:47.792943  515600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:20:47.795956  515600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:20:47.799178  515600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:20:47.802206  515600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:20:47.805266  515600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:20:47.808584  515600 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:20:47.809145  515600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:20:47.852136  515600 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:20:47.852267  515600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:20:47.928921  515600 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 00:20:47.916843185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:20:47.929033  515600 docker.go:319] overlay module found
	I1212 00:20:47.932251  515600 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 00:20:47.935125  515600 start.go:309] selected driver: docker
	I1212 00:20:47.935162  515600 start.go:927] validating driver "docker" against &{Name:functional-921447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-921447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:20:47.935264  515600 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:20:47.938810  515600 out.go:203] 
	W1212 00:20:47.941638  515600 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 00:20:47.944472  515600 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-921447 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-921447 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-q7bl9" [c08df8f4-abf9-4cd3-9ed1-b3ebe339816b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-q7bl9" [c08df8f4-abf9-4cd3-9ed1-b3ebe339816b] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003521826s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31717
functional_test.go:1680: http://192.168.49.2:31717: success! body:
Request served by hello-node-connect-7d85dfc575-q7bl9

HTTP/1.1 GET /

Host: 192.168.49.2:31717
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.59s)
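ServiceCmdConnect resolves the NodePort URL and then expects the echo-server body to name the serving pod, as it does above. A sketch that polls such a URL until it answers, using the URL printed in this run (a re-run would get a different NodePort):

// poll_service.go - retry an HTTP GET against a NodePort service until it responds.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const url = "http://192.168.49.2:31717" // from this run; NodePorts vary per run
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("success! body:\n%s", body) // echo-server reports the pod name
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("service never became reachable")
}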

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (21.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [c83d0a24-6769-4f9c-b346-37569eec9749] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003893234s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-921447 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-921447 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-921447 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-921447 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [5a3f6402-a038-4dad-b595-9f2a4216fdde] Pending
helpers_test.go:353: "sp-pod" [5a3f6402-a038-4dad-b595-9f2a4216fdde] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [5a3f6402-a038-4dad-b595-9f2a4216fdde] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004343157s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-921447 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-921447 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-921447 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [30524d62-d1ba-4dcf-b1d1-02c450fb9f0b] Pending
helpers_test.go:353: "sp-pod" [30524d62-d1ba-4dcf-b1d1-02c450fb9f0b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003277455s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-921447 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.94s)
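The PVC subtest's core assertion is that a file written under the mount survives deleting and recreating the pod, because the volume is backed by the claim rather than the pod. A condensed sketch of that write/recreate/read cycle, assuming the same testdata manifests and context as above:

// pvc_persistence.go - write a file, recycle the pod, confirm the file survived.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-921447"}, args...)...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
	}
	fmt.Printf("%s", out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// In practice you must wait for the new sp-pod to be Running before exec'ing,
	// as the test does with its 4m0s pod watcher.
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // expect "foo" to still be there
}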

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh -n functional-921447 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 cp functional-921447:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd350177750/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh -n functional-921447 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh -n functional-921447 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.43s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/490954/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "sudo cat /etc/test/nested/copy/490954/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/490954.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "sudo cat /etc/ssl/certs/490954.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/490954.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "sudo cat /usr/share/ca-certificates/490954.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4909542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "sudo cat /etc/ssl/certs/4909542.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4909542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "sudo cat /usr/share/ca-certificates/4909542.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)
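CertSync checks each synced certificate in two places: a name-based path (490954.pem) and an OpenSSL subject-hash filename of the kind system trust stores use (51391683.0 above). A sketch that derives such a hash name for a local PEM file, assuming the openssl CLI is installed and using a hypothetical cert.pem path:

// cert_hash.go - compute the subject-hash filename a CA cert gets in /etc/ssl/certs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "openssl x509 -hash -noout" prints the subject hash (e.g. 51391683);
	// the trust-store entry is that hash plus a ".0" suffix (".1" and up on collisions).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "cert.pem").Output() // cert.pem is a hypothetical local path
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Printf("expected trust-store name: %s.0\n", strings.TrimSpace(string(out)))
}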

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-921447 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-921447 ssh "sudo systemctl is-active docker": exit status 1 (371.538171ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-921447 ssh "sudo systemctl is-active containerd": exit status 1 (435.450025ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)
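
The two non-zero exits here are the expected outcome, not failures: systemctl is-active follows LSB conventions and exits 3 for an inactive unit while printing "inactive", and ssh propagates that code. On a crio cluster both docker and containerd should report inactive; a sketch that keys off the text instead of the exit status:

    out/minikube-linux-arm64 -p functional-921447 ssh "sudo systemctl is-active docker" | grep -qx inactive \
      && echo "docker correctly disabled"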

                                                
                                    
x
+
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-921447 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-921447 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-921447 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 513366: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-921447 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-921447 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-921447 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [75adbf1a-7d16-4e7f-ba81-2d3f064ed69e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [75adbf1a-7d16-4e7f-ba81-2d3f064ed69e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003313405s
I1212 00:20:27.063989  490954 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)
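
The 9s here is the poll loop waiting for the run=nginx-svc pod to go Running; testsvc.yaml creates a LoadBalancer service that the tunnel started above will later assign an IP. Outside the suite the same wait can be written with kubectl directly, as a sketch:

    kubectl --context functional-921447 apply -f testdata/testsvc.yaml
    kubectl --context functional-921447 wait --for=condition=Ready pod -l run=nginx-svc --timeout=4m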

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-921447 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
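
The jsonpath only returns a value once the running minikube tunnel has populated .status.loadBalancer.ingress on the LoadBalancer service; before that it prints an empty string. A polling sketch:

    until ip=$(kubectl --context functional-921447 get svc nginx-svc \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && [ -n "$ip" ]; do sleep 1; done
    curl -s "http://$ip"    # 10.99.222.93 in this run, per AccessDirect below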

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.222.93 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-921447 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-921447 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-921447 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-l8smz" [677bf13c-0298-49ac-b411-db60c468d296] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-l8smz" [677bf13c-0298-49ac-b411-db60c468d296] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003646726s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)
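
This is the standard create-then-expose pair; exposing as NodePort is what lets the later ServiceCmd subtests reach the pod at the node IP (192.168.49.2:31996 below). A sketch of inspecting the allocated port:

    kubectl --context functional-921447 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'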

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "353.383178ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.822284ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "371.915484ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "52.869193ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
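
The --light variant skips probing each cluster's live status, which is why it returns in ~53ms against ~372ms for the full listing. A sketch of consuming the output, assuming jq is installed and that the JSON groups profiles under "valid"/"invalid" keys with a "Name" field, as recent minikube releases do:

    out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'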

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-921447 /tmp/TestFunctionalparallelMountCmdany-port3974887333/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765498841387371034" to /tmp/TestFunctionalparallelMountCmdany-port3974887333/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765498841387371034" to /tmp/TestFunctionalparallelMountCmdany-port3974887333/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765498841387371034" to /tmp/TestFunctionalparallelMountCmdany-port3974887333/001/test-1765498841387371034
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (353.653358ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 00:20:41.741276  490954 retry.go:31] will retry after 615.223487ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 00:20 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 00:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 00:20 test-1765498841387371034
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh cat /mount-9p/test-1765498841387371034
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-921447 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [1eee3bd2-0441-4d91-abcd-287a9c629eb6] Pending
helpers_test.go:353: "busybox-mount" [1eee3bd2-0441-4d91-abcd-287a9c629eb6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [1eee3bd2-0441-4d91-abcd-287a9c629eb6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [1eee3bd2-0441-4d91-abcd-287a9c629eb6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00360171s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-921447 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-921447 /tmp/TestFunctionalparallelMountCmdany-port3974887333/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.42s)
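
The first findmnt probe can land before the mount daemon is up, hence the single retry before the 9p mount shows up. A by-hand sketch, with /tmp/hostdir standing in for the test's temp directory:

    out/minikube-linux-arm64 mount -p functional-921447 /tmp/hostdir:/mount-9p &
    out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T /mount-9p"
    out/minikube-linux-arm64 mount -p functional-921447 --kill=true    # tear down, as VerifyCleanup does below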

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 service list -o json
functional_test.go:1504: Took "561.144725ms" to run "out/minikube-linux-arm64 -p functional-921447 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31996
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31996
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
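
HTTPS, Format, and URL all resolve to the same NodePort endpoint; --https only changes the printed scheme and --format={{.IP}} extracts one field from the same record. A sketch of actually hitting the service:

    url=$(out/minikube-linux-arm64 -p functional-921447 service hello-node --url)
    curl -s "$url"    # kicbase/echo-server should echo the request back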

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-921447 /tmp/TestFunctionalparallelMountCmdspecific-port925684433/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (438.410019ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 00:20:49.245021  490954 retry.go:31] will retry after 472.099652ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-921447 /tmp/TestFunctionalparallelMountCmdspecific-port925684433/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-921447 ssh "sudo umount -f /mount-9p": exit status 1 (280.071727ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-921447 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-921447 /tmp/TestFunctionalparallelMountCmdspecific-port925684433/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-921447 /tmp/TestFunctionalparallelMountCmdVerifyCleanup348015517/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-921447 /tmp/TestFunctionalparallelMountCmdVerifyCleanup348015517/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-921447 /tmp/TestFunctionalparallelMountCmdVerifyCleanup348015517/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T" /mount1: exit status 1 (699.554469ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 00:20:51.712041  490954 retry.go:31] will retry after 615.08374ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-921447 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-921447 /tmp/TestFunctionalparallelMountCmdVerifyCleanup348015517/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-921447 /tmp/TestFunctionalparallelMountCmdVerifyCleanup348015517/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-921447 /tmp/TestFunctionalparallelMountCmdVerifyCleanup348015517/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.40s)
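
The "mount -p <profile> --kill=true" invocation terminates every mount process belonging to the profile, which is why the three per-mount stop attempts that follow find no parent process ("assuming dead"). It is also the cleanest way to clear stray mounts by hand, as a sketch:

    out/minikube-linux-arm64 mount -p functional-921447 --kill=true
    out/minikube-linux-arm64 -p functional-921447 ssh "findmnt -T /mount1" || echo "unmounted, as expected"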

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-921447 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-921447
localhost/kicbase/echo-server:functional-921447
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-921447 image ls --format short --alsologtostderr:
I1212 00:21:04.560060  518437 out.go:360] Setting OutFile to fd 1 ...
I1212 00:21:04.560183  518437 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:04.560188  518437 out.go:374] Setting ErrFile to fd 2...
I1212 00:21:04.560193  518437 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:04.560780  518437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:21:04.561409  518437 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:21:04.561660  518437 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:21:04.562177  518437 cli_runner.go:164] Run: docker container inspect functional-921447 --format={{.State.Status}}
I1212 00:21:04.584681  518437 ssh_runner.go:195] Run: systemctl --version
I1212 00:21:04.584734  518437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-921447
I1212 00:21:04.609586  518437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-921447/id_rsa Username:docker}
I1212 00:21:04.717829  518437 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
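
The four ImageList subtests (short, and table/json/yaml below) are all the same "sudo crictl images --output json" call, visible at the end of each stderr trace, rendered by different formatters; the --format flag is purely presentational. Sketch:

    out/minikube-linux-arm64 -p functional-921447 image ls --format table
    out/minikube-linux-arm64 -p functional-921447 ssh "sudo crictl images"    # the underlying data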

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-921447 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-921447  │ ce2d2cda2d858 │ 4.79MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ 10afed3caf3ee │ 55.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/minikube-local-cache-test     │ functional-921447  │ 1becebb0f0f05 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-921447 image ls --format table --alsologtostderr:
I1212 00:21:05.115246  518605 out.go:360] Setting OutFile to fd 1 ...
I1212 00:21:05.115493  518605 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:05.115528  518605 out.go:374] Setting ErrFile to fd 2...
I1212 00:21:05.115548  518605 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:05.115852  518605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:21:05.116782  518605 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:21:05.116952  518605 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:21:05.117575  518605 cli_runner.go:164] Run: docker container inspect functional-921447 --format={{.State.Status}}
I1212 00:21:05.148754  518605 ssh_runner.go:195] Run: systemctl --version
I1212 00:21:05.148812  518605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-921447
I1212 00:21:05.193264  518605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-921447/id_rsa Username:docker}
I1212 00:21:05.307049  518605 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-921447 image ls --format json --alsologtostderr:
[{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d","public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077248"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-921447"],"size":"4789170"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"1becebb0f0f05f244402b7484d9f61be9163cdda2efcdbb0d20e9775ec431c34","repoDigests":["localhost/minikube-local-cache-test@sha256:452cf63c058e53c1a22f413c66eb282f18db780820afa097930a49ceee6e1872"],"repoTags":["localhost/minikube-local-cache-test:functional-921447"],"size":"3330"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-921447 image ls --format json --alsologtostderr:
I1212 00:21:04.832174  518516 out.go:360] Setting OutFile to fd 1 ...
I1212 00:21:04.832418  518516 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:04.832450  518516 out.go:374] Setting ErrFile to fd 2...
I1212 00:21:04.832469  518516 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:04.832754  518516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:21:04.833418  518516 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:21:04.833580  518516 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:21:04.834144  518516 cli_runner.go:164] Run: docker container inspect functional-921447 --format={{.State.Status}}
I1212 00:21:04.868088  518516 ssh_runner.go:195] Run: systemctl --version
I1212 00:21:04.868177  518516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-921447
I1212 00:21:04.890544  518516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-921447/id_rsa Username:docker}
I1212 00:21:05.002066  518516 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-921447 image ls --format yaml --alsologtostderr:
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 1becebb0f0f05f244402b7484d9f61be9163cdda2efcdbb0d20e9775ec431c34
repoDigests:
- localhost/minikube-local-cache-test@sha256:452cf63c058e53c1a22f413c66eb282f18db780820afa097930a49ceee6e1872
repoTags:
- localhost/minikube-local-cache-test:functional-921447
size: "3330"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-921447
size: "4789170"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077248"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-921447 image ls --format yaml --alsologtostderr:
I1212 00:21:04.547682  518441 out.go:360] Setting OutFile to fd 1 ...
I1212 00:21:04.547845  518441 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:04.547874  518441 out.go:374] Setting ErrFile to fd 2...
I1212 00:21:04.547894  518441 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:04.548164  518441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:21:04.548864  518441 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:21:04.549028  518441 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:21:04.549585  518441 cli_runner.go:164] Run: docker container inspect functional-921447 --format={{.State.Status}}
I1212 00:21:04.569473  518441 ssh_runner.go:195] Run: systemctl --version
I1212 00:21:04.569557  518441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-921447
I1212 00:21:04.593367  518441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-921447/id_rsa Username:docker}
I1212 00:21:04.705416  518441 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-921447 ssh pgrep buildkitd: exit status 1 (396.169653ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image build -t localhost/my-image:functional-921447 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-921447 image build -t localhost/my-image:functional-921447 testdata/build --alsologtostderr: (3.431653227s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-921447 image build -t localhost/my-image:functional-921447 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4d126e3e519
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-921447
--> 23812bf5a8a
Successfully tagged localhost/my-image:functional-921447
23812bf5a8ac764fa35b1c4dcb74167f5c829b73d9d01dd491d7de95c642c014
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-921447 image build -t localhost/my-image:functional-921447 testdata/build --alsologtostderr:
I1212 00:21:05.228056  518624 out.go:360] Setting OutFile to fd 1 ...
I1212 00:21:05.231099  518624 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:05.231149  518624 out.go:374] Setting ErrFile to fd 2...
I1212 00:21:05.231170  518624 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:21:05.231506  518624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:21:05.232239  518624 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:21:05.233061  518624 config.go:182] Loaded profile config "functional-921447": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 00:21:05.233691  518624 cli_runner.go:164] Run: docker container inspect functional-921447 --format={{.State.Status}}
I1212 00:21:05.254375  518624 ssh_runner.go:195] Run: systemctl --version
I1212 00:21:05.254426  518624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-921447
I1212 00:21:05.272765  518624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-921447/id_rsa Username:docker}
I1212 00:21:05.381681  518624 build_images.go:162] Building image from path: /tmp/build.1151569089.tar
I1212 00:21:05.381767  518624 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 00:21:05.389911  518624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1151569089.tar
I1212 00:21:05.393724  518624 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1151569089.tar: stat -c "%s %y" /var/lib/minikube/build/build.1151569089.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1151569089.tar': No such file or directory
I1212 00:21:05.393755  518624 ssh_runner.go:362] scp /tmp/build.1151569089.tar --> /var/lib/minikube/build/build.1151569089.tar (3072 bytes)
I1212 00:21:05.413886  518624 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1151569089
I1212 00:21:05.422491  518624 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1151569089 -xf /var/lib/minikube/build/build.1151569089.tar
I1212 00:21:05.431025  518624 crio.go:315] Building image: /var/lib/minikube/build/build.1151569089
I1212 00:21:05.431116  518624 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-921447 /var/lib/minikube/build/build.1151569089 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1212 00:21:08.551352  518624 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-921447 /var/lib/minikube/build/build.1151569089 --cgroup-manager=cgroupfs: (3.120209412s)
I1212 00:21:08.551422  518624 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1151569089
I1212 00:21:08.559150  518624 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1151569089.tar
I1212 00:21:08.566956  518624 build_images.go:218] Built localhost/my-image:functional-921447 from /tmp/build.1151569089.tar
I1212 00:21:08.566988  518624 build_images.go:134] succeeded building to: functional-921447
I1212 00:21:08.566993  518624 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.06s)
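
On crio, image build ships the build context to the node as a tar and runs podman build there (the build_images.go and podman lines in the trace above). From the three STEPs, the context is equivalent to the following reconstruction (a sketch, not the literal contents of testdata/build):

    mkdir -p /tmp/build && cd /tmp/build
    printf 'hello\n' > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-linux-arm64 -p functional-921447 image build -t localhost/my-image:functional-921447 .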

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-921447
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image load --daemon kicbase/echo-server:functional-921447 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-921447 image load --daemon kicbase/echo-server:functional-921447 --alsologtostderr: (3.128542784s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.40s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image load --daemon kicbase/echo-server:functional-921447 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-921447
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image load --daemon kicbase/echo-server:functional-921447 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image save kicbase/echo-server:functional-921447 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image rm kicbase/echo-server:functional-921447 --alsologtostderr
2025/12/12 00:21:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-921447
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-921447 image save --daemon kicbase/echo-server:functional-921447 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-921447
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-921447
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-921447
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-921447
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22101-487723/.minikube/files/etc/test/nested/copy/490954/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.53s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-035643 cache add registry.k8s.io/pause:3.1: (1.199235509s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-035643 cache add registry.k8s.io/pause:3.3: (1.200428613s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-035643 cache add registry.k8s.io/pause:latest: (1.133589604s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2705707520/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 cache add minikube-local-cache-test:functional-035643
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 cache delete minikube-local-cache-test:functional-035643
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-035643
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.81s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.157027ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.81s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.98s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1595467266/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.98s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 config get cpus: exit status 14 (74.3792ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 config get cpus: exit status 14 (62.000333ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-035643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-035643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (193.941202ms)

-- stdout --
	* [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1212 00:50:35.351774  548203 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:50:35.351961  548203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:50:35.351991  548203 out.go:374] Setting ErrFile to fd 2...
	I1212 00:50:35.352010  548203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:50:35.352378  548203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:50:35.353231  548203 out.go:368] Setting JSON to false
	I1212 00:50:35.354079  548203 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12781,"bootTime":1765487855,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:50:35.354180  548203 start.go:143] virtualization:  
	I1212 00:50:35.357291  548203 out.go:179] * [functional-035643] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:50:35.361170  548203 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:50:35.361226  548203 notify.go:221] Checking for updates...
	I1212 00:50:35.367129  548203 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:50:35.370177  548203 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:50:35.373059  548203 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:50:35.375997  548203 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:50:35.378845  548203 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:50:35.382144  548203 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:50:35.382804  548203 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:50:35.414804  548203 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:50:35.414984  548203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:50:35.480907  548203 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:50:35.466817049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:50:35.481003  548203 docker.go:319] overlay module found
	I1212 00:50:35.483960  548203 out.go:179] * Using the docker driver based on existing profile
	I1212 00:50:35.486702  548203 start.go:309] selected driver: docker
	I1212 00:50:35.486717  548203 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:50:35.486800  548203 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:50:35.490107  548203 out.go:203] 
	W1212 00:50:35.493001  548203 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 00:50:35.495839  548203 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-035643 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-035643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-035643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (195.647752ms)

-- stdout --
	* [functional-035643] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1212 00:50:35.167094  548154 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:50:35.167320  548154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:50:35.167333  548154 out.go:374] Setting ErrFile to fd 2...
	I1212 00:50:35.167339  548154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:50:35.167763  548154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:50:35.168277  548154 out.go:368] Setting JSON to false
	I1212 00:50:35.169229  548154 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12781,"bootTime":1765487855,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1212 00:50:35.169303  548154 start.go:143] virtualization:  
	I1212 00:50:35.172679  548154 out.go:179] * [functional-035643] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1212 00:50:35.176556  548154 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:50:35.176619  548154 notify.go:221] Checking for updates...
	I1212 00:50:35.182489  548154 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:50:35.185508  548154 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	I1212 00:50:35.188463  548154 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	I1212 00:50:35.191483  548154 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:50:35.194396  548154 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:50:35.197844  548154 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 00:50:35.198483  548154 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:50:35.223685  548154 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:50:35.223805  548154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:50:35.286316  548154 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:50:35.276862752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:50:35.286427  548154 docker.go:319] overlay module found
	I1212 00:50:35.289596  548154 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 00:50:35.292547  548154 start.go:309] selected driver: docker
	I1212 00:50:35.292572  548154 start.go:927] validating driver "docker" against &{Name:functional-035643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-035643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:50:35.292673  548154 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:50:35.296222  548154 out.go:203] 
	W1212 00:50:35.299136  548154 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 00:50:35.301847  548154 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.72s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh -n functional-035643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 cp functional-035643:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1376821364/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh -n functional-035643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh -n functional-035643 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/490954/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "sudo cat /etc/test/nested/copy/490954/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.7s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/490954.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "sudo cat /etc/ssl/certs/490954.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/490954.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "sudo cat /usr/share/ca-certificates/490954.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4909542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "sudo cat /etc/ssl/certs/4909542.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4909542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "sudo cat /usr/share/ca-certificates/4909542.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.70s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 ssh "sudo systemctl is-active docker": exit status 1 (284.321166ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 ssh "sudo systemctl is-active containerd": exit status 1 (270.505799ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.29s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-035643 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-035643 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.4s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "353.451229ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "54.500569ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "346.625266ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "64.39608ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3276714616/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3276714616/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 ssh "sudo umount -f /mount-9p": exit status 1 (294.239636ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-035643 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3276714616/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-035643 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-035643 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2477175357/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.27s)
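
As the cleanup test shows, one flag tears down every mount daemon for a profile; a minimal sketch:

$ out/minikube-linux-arm64 mount -p functional-035643 --kill=true   # kills all running mount processes for the profile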

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-035643 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-035643
localhost/kicbase/echo-server:functional-035643
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-035643 image ls --format short --alsologtostderr:
I1212 00:50:47.967501  550351 out.go:360] Setting OutFile to fd 1 ...
I1212 00:50:47.967669  550351 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:47.967681  550351 out.go:374] Setting ErrFile to fd 2...
I1212 00:50:47.967688  550351 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:47.967949  550351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:50:47.968677  550351 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:50:47.968841  550351 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:50:47.969375  550351 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
I1212 00:50:47.986994  550351 ssh_runner.go:195] Run: systemctl --version
I1212 00:50:47.987059  550351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
I1212 00:50:48.006386  550351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
I1212 00:50:48.113887  550351 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-035643 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/minikube-local-cache-test     │ functional-035643  │ 1becebb0f0f05 │ 3.33kB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ localhost/kicbase/echo-server           │ functional-035643  │ ce2d2cda2d858 │ 4.79MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ localhost/my-image                      │ functional-035643  │ 80f1f9be203fd │ 1.64MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-035643 image ls --format table --alsologtostderr:
I1212 00:50:52.445313  550847 out.go:360] Setting OutFile to fd 1 ...
I1212 00:50:52.445646  550847 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:52.445679  550847 out.go:374] Setting ErrFile to fd 2...
I1212 00:50:52.445698  550847 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:52.446008  550847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:50:52.446724  550847 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:50:52.446894  550847 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:50:52.447395  550847 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
I1212 00:50:52.464480  550847 ssh_runner.go:195] Run: systemctl --version
I1212 00:50:52.464542  550847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
I1212 00:50:52.481207  550847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
I1212 00:50:52.585452  550847 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-035643 image ls --format json --alsologtostderr:
[{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"1becebb0f0f05f244402b7484d9f61be9163cdda2efcdbb0d20e9775ec431c34","repoDigests":["localhost/minikube-local-cache-test@sha256:452cf63c058e53c1a22f413c66eb282f18db780820afa097930a49ceee6e1872"],"repoTags":["localhost/minikube-local-cache-test:functional-035643"],"size":"3330"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c0
2799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"cc4c410383fc6e090d797a0814698956e3c37312f5fa2c3b894b1b3844113739","repoDigests":["docker.io/library/982429ea6b47ce153564c245fec99a362ac2224376777845cd2
b3ea8a0312e90-tmp@sha256:3d758e54ef637eb1dd6c727fafedf3adb396896bc6948637313b6f5cdcf630d8"],"repoTags":[],"size":"1638179"},{"id":"80f1f9be203fd5e8ca2f3bf8402011707bcaf7614f937b41dfad943e8e17dc2d","repoDigests":["localhost/my-image@sha256:e568737112935a480af49de2cb5714a3975c8ee56dfd100e5732cf5e28a45ab6"],"repoTags":["localhost/my-image:functional-035643"],"size":"1640791"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296b
f30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-035643"],"size":"4788229"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737b
f3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"rep
oTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-035643 image ls --format json --alsologtostderr:
I1212 00:50:52.223193  550809 out.go:360] Setting OutFile to fd 1 ...
I1212 00:50:52.223357  550809 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:52.223387  550809 out.go:374] Setting ErrFile to fd 2...
I1212 00:50:52.223407  550809 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:52.223658  550809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:50:52.224286  550809 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:50:52.224443  550809 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:50:52.225029  550809 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
I1212 00:50:52.242100  550809 ssh_runner.go:195] Run: systemctl --version
I1212 00:50:52.242158  550809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
I1212 00:50:52.259663  550809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
I1212 00:50:52.361010  550809 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-035643 image ls --format yaml --alsologtostderr:
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-035643
size: "4788229"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 1becebb0f0f05f244402b7484d9f61be9163cdda2efcdbb0d20e9775ec431c34
repoDigests:
- localhost/minikube-local-cache-test@sha256:452cf63c058e53c1a22f413c66eb282f18db780820afa097930a49ceee6e1872
repoTags:
- localhost/minikube-local-cache-test:functional-035643
size: "3330"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-035643 image ls --format yaml --alsologtostderr:
I1212 00:50:48.220675  550393 out.go:360] Setting OutFile to fd 1 ...
I1212 00:50:48.220801  550393 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:48.220810  550393 out.go:374] Setting ErrFile to fd 2...
I1212 00:50:48.220816  550393 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:48.221069  550393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:50:48.221703  550393 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:50:48.221831  550393 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:50:48.222341  550393 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
I1212 00:50:48.242138  550393 ssh_runner.go:195] Run: systemctl --version
I1212 00:50:48.242195  550393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
I1212 00:50:48.266074  550393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
I1212 00:50:48.369494  550393 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)
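
The four listing tests above differ only in output encoding; each one shells into the node, runs sudo crictl images --output json (visible in the stderr traces), and re-renders the result. A sketch of the same calls:

$ out/minikube-linux-arm64 -p functional-035643 image ls --format short   # one repo:tag per line
$ out/minikube-linux-arm64 -p functional-035643 image ls --format table   # boxed table with image IDs and sizes
$ out/minikube-linux-arm64 -p functional-035643 image ls --format json    # machine-readable list
$ out/minikube-linux-arm64 -p functional-035643 image ls --format yaml    # same data rendered as YAML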

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.77s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-035643 ssh pgrep buildkitd: exit status 1 (269.514972ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image build -t localhost/my-image:functional-035643 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-035643 image build -t localhost/my-image:functional-035643 testdata/build --alsologtostderr: (3.264190446s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-035643 image build -t localhost/my-image:functional-035643 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cc4c410383f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-035643
--> 80f1f9be203
Successfully tagged localhost/my-image:functional-035643
80f1f9be203fd5e8ca2f3bf8402011707bcaf7614f937b41dfad943e8e17dc2d
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-035643 image build -t localhost/my-image:functional-035643 testdata/build --alsologtostderr:
I1212 00:50:48.722841  550492 out.go:360] Setting OutFile to fd 1 ...
I1212 00:50:48.723016  550492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:48.723028  550492 out.go:374] Setting ErrFile to fd 2...
I1212 00:50:48.723034  550492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:50:48.723307  550492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
I1212 00:50:48.723951  550492 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:50:48.724614  550492 config.go:182] Loaded profile config "functional-035643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 00:50:48.725188  550492 cli_runner.go:164] Run: docker container inspect functional-035643 --format={{.State.Status}}
I1212 00:50:48.742441  550492 ssh_runner.go:195] Run: systemctl --version
I1212 00:50:48.742505  550492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035643
I1212 00:50:48.758842  550492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/functional-035643/id_rsa Username:docker}
I1212 00:50:48.861345  550492 build_images.go:162] Building image from path: /tmp/build.2205726426.tar
I1212 00:50:48.861415  550492 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 00:50:48.869113  550492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2205726426.tar
I1212 00:50:48.872698  550492 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2205726426.tar: stat -c "%s %y" /var/lib/minikube/build/build.2205726426.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2205726426.tar': No such file or directory
I1212 00:50:48.872730  550492 ssh_runner.go:362] scp /tmp/build.2205726426.tar --> /var/lib/minikube/build/build.2205726426.tar (3072 bytes)
I1212 00:50:48.890594  550492 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2205726426
I1212 00:50:48.898314  550492 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2205726426 -xf /var/lib/minikube/build/build.2205726426.tar
I1212 00:50:48.906158  550492 crio.go:315] Building image: /var/lib/minikube/build/build.2205726426
I1212 00:50:48.906231  550492 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-035643 /var/lib/minikube/build/build.2205726426 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1212 00:50:51.909467  550492 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-035643 /var/lib/minikube/build/build.2205726426 --cgroup-manager=cgroupfs: (3.003211751s)
I1212 00:50:51.909540  550492 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2205726426
I1212 00:50:51.917750  550492 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2205726426.tar
I1212 00:50:51.927122  550492 build_images.go:218] Built localhost/my-image:functional-035643 from /tmp/build.2205726426.tar
I1212 00:50:51.927157  550492 build_images.go:134] succeeded building to: functional-035643
I1212 00:50:51.927163  550492 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.77s)
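
Because the pgrep probe finds no buildkitd, minikube tars the build context, copies it to /var/lib/minikube/build on the node, and builds there with sudo podman build (see the stderr trace above). Reproduced by hand, roughly:

$ out/minikube-linux-arm64 -p functional-035643 image build -t localhost/my-image:functional-035643 testdata/build
$ out/minikube-linux-arm64 -p functional-035643 image ls   # the new localhost/my-image tag should now appear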

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-035643
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image load --daemon kicbase/echo-server:functional-035643 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image load --daemon kicbase/echo-server:functional-035643 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-035643
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image load --daemon kicbase/echo-server:functional-035643 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image save kicbase/echo-server:functional-035643 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image rm kicbase/echo-server:functional-035643 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-035643
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 image save --daemon kicbase/echo-server:functional-035643 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-035643
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.43s)
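
Taken together, the save/load tests walk one image through the cluster runtime, a tarball, and the host Docker daemon. A condensed sketch (the tar path here is a placeholder; the run above used a workspace path):

$ out/minikube-linux-arm64 -p functional-035643 image save kicbase/echo-server:functional-035643 /tmp/echo-server-save.tar
$ out/minikube-linux-arm64 -p functional-035643 image rm kicbase/echo-server:functional-035643
$ out/minikube-linux-arm64 -p functional-035643 image load /tmp/echo-server-save.tar
$ out/minikube-linux-arm64 -p functional-035643 image save --daemon kicbase/echo-server:functional-035643
$ docker image inspect localhost/kicbase/echo-server:functional-035643   # note the localhost/ prefix on the restored tag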

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-035643 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)
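
All three update-context variants run the same command, which rewrites the profile's kubeconfig entry to match the cluster's current endpoint:

$ out/minikube-linux-arm64 -p functional-035643 update-context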

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-035643
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-035643
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-035643
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (185.66s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1212 00:53:18.669709  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:18.678383  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:18.689702  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:18.711080  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:18.752464  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:18.833809  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:18.995346  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:19.317048  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:19.959126  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:21.240615  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:23.801926  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:28.923338  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:33.590812  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:39.165410  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:59.647315  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:54:40.608668  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:55:17.605178  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m4.734875378s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (185.66s)
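
The HA start above reduces to one command plus a status check; a sketch with the flags used in this job:

$ out/minikube-linux-arm64 -p ha-034811 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
$ out/minikube-linux-arm64 -p ha-034811 status   # expect multiple control-plane nodes reporting Running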

TestMultiControlPlane/serial/DeployApp (7.89s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 kubectl -- rollout status deployment/busybox: (5.17417082s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-4hp67 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-g99mm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-klsqz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-4hp67 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-g99mm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-klsqz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-4hp67 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-g99mm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-klsqz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.89s)
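
Each DNS probe follows the same pattern: exec nslookup in a busybox pod against progressively more qualified service names. One representative call (pod name taken from this run):

$ out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-4hp67 -- nslookup kubernetes.default.svc.cluster.local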

TestMultiControlPlane/serial/PingHostFromPods (1.49s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-4hp67 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-4hp67 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-g99mm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-g99mm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-klsqz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-klsqz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.49s)
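
The host-reachability check first resolves host.minikube.internal inside the pod, then pings the address it maps to (the docker network gateway in this run):

$ out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-4hp67 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
$ out/minikube-linux-arm64 -p ha-034811 kubectl -- exec busybox-7b57f96db7-4hp67 -- sh -c "ping -c 1 192.168.49.1"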

TestMultiControlPlane/serial/AddWorkerNode (60.97s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 node add --alsologtostderr -v 5
E1212 00:56:02.530235  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 node add --alsologtostderr -v 5: (59.928734823s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5: (1.037227296s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.97s)
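
Adding the worker is a single node add; without a control-plane flag the new node (ha-034811-m04 in this run) joins as a worker:

$ out/minikube-linux-arm64 -p ha-034811 node add
$ out/minikube-linux-arm64 -p ha-034811 status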

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-034811 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)
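
The label assertion renders every node's label map through jsonpath; an equivalent, slightly more readable variant (a sketch over the same data):

$ kubectl --context ha-034811 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels}{"\n"}{end}'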

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.068363038s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

TestMultiControlPlane/serial/CopyFile (20.37s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 status --output json --alsologtostderr -v 5: (1.119597915s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp testdata/cp-test.txt ha-034811:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2722023474/001/cp-test_ha-034811.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811:/home/docker/cp-test.txt ha-034811-m02:/home/docker/cp-test_ha-034811_ha-034811-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m02 "sudo cat /home/docker/cp-test_ha-034811_ha-034811-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811:/home/docker/cp-test.txt ha-034811-m03:/home/docker/cp-test_ha-034811_ha-034811-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m03 "sudo cat /home/docker/cp-test_ha-034811_ha-034811-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811:/home/docker/cp-test.txt ha-034811-m04:/home/docker/cp-test_ha-034811_ha-034811-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m04 "sudo cat /home/docker/cp-test_ha-034811_ha-034811-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp testdata/cp-test.txt ha-034811-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2722023474/001/cp-test_ha-034811-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811-m02:/home/docker/cp-test.txt ha-034811:/home/docker/cp-test_ha-034811-m02_ha-034811.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811 "sudo cat /home/docker/cp-test_ha-034811-m02_ha-034811.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811-m02:/home/docker/cp-test.txt ha-034811-m03:/home/docker/cp-test_ha-034811-m02_ha-034811-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m03 "sudo cat /home/docker/cp-test_ha-034811-m02_ha-034811-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811-m02:/home/docker/cp-test.txt ha-034811-m04:/home/docker/cp-test_ha-034811-m02_ha-034811-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m04 "sudo cat /home/docker/cp-test_ha-034811-m02_ha-034811-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp testdata/cp-test.txt ha-034811-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2722023474/001/cp-test_ha-034811-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811-m03:/home/docker/cp-test.txt ha-034811:/home/docker/cp-test_ha-034811-m03_ha-034811.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811 "sudo cat /home/docker/cp-test_ha-034811-m03_ha-034811.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811-m03:/home/docker/cp-test.txt ha-034811-m02:/home/docker/cp-test_ha-034811-m03_ha-034811-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m02 "sudo cat /home/docker/cp-test_ha-034811-m03_ha-034811-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811-m03:/home/docker/cp-test.txt ha-034811-m04:/home/docker/cp-test_ha-034811-m03_ha-034811-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m04 "sudo cat /home/docker/cp-test_ha-034811-m03_ha-034811-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp testdata/cp-test.txt ha-034811-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2722023474/001/cp-test_ha-034811-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811-m04:/home/docker/cp-test.txt ha-034811:/home/docker/cp-test_ha-034811-m04_ha-034811.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811 "sudo cat /home/docker/cp-test_ha-034811-m04_ha-034811.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811-m04:/home/docker/cp-test.txt ha-034811-m02:/home/docker/cp-test_ha-034811-m04_ha-034811-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m02 "sudo cat /home/docker/cp-test_ha-034811-m04_ha-034811-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 cp ha-034811-m04:/home/docker/cp-test.txt ha-034811-m03:/home/docker/cp-test_ha-034811-m04_ha-034811-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m03 "sudo cat /home/docker/cp-test_ha-034811-m04_ha-034811-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.37s)
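
The CopyFile matrix above is the same two-step round trip repeated for every source/destination node pair; a minimal sketch, using the profile name and paths from this run:

    out/minikube-linux-arm64 -p ha-034811 cp testdata/cp-test.txt ha-034811-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-034811 ssh -n ha-034811-m02 "sudo cat /home/docker/cp-test.txt"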

TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 node stop m02 --alsologtostderr -v 5: (12.085627979s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5: exit status 7 (787.853549ms)

-- stdout --
	ha-034811
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-034811-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-034811-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-034811-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1212 00:57:12.585186  566529 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:57:12.585416  566529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:57:12.585430  566529 out.go:374] Setting ErrFile to fd 2...
	I1212 00:57:12.585436  566529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:57:12.585745  566529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 00:57:12.586020  566529 out.go:368] Setting JSON to false
	I1212 00:57:12.586064  566529 mustload.go:66] Loading cluster: ha-034811
	I1212 00:57:12.586147  566529 notify.go:221] Checking for updates...
	I1212 00:57:12.586515  566529 config.go:182] Loaded profile config "ha-034811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 00:57:12.586589  566529 status.go:174] checking status of ha-034811 ...
	I1212 00:57:12.588018  566529 cli_runner.go:164] Run: docker container inspect ha-034811 --format={{.State.Status}}
	I1212 00:57:12.608727  566529 status.go:371] ha-034811 host status = "Running" (err=<nil>)
	I1212 00:57:12.608752  566529 host.go:66] Checking if "ha-034811" exists ...
	I1212 00:57:12.609035  566529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-034811
	I1212 00:57:12.639130  566529 host.go:66] Checking if "ha-034811" exists ...
	I1212 00:57:12.639470  566529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:57:12.639524  566529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-034811
	I1212 00:57:12.662434  566529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/ha-034811/id_rsa Username:docker}
	I1212 00:57:12.776726  566529 ssh_runner.go:195] Run: systemctl --version
	I1212 00:57:12.787300  566529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:57:12.802458  566529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:57:12.860988  566529 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-12 00:57:12.85167053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:57:12.861554  566529 kubeconfig.go:125] found "ha-034811" server: "https://192.168.49.254:8443"
	I1212 00:57:12.861581  566529 api_server.go:166] Checking apiserver status ...
	I1212 00:57:12.861626  566529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:57:12.874410  566529 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1252/cgroup
	I1212 00:57:12.883170  566529 api_server.go:182] apiserver freezer: "6:freezer:/docker/5465b1dd8fc092cab2df752e1c3f63e8f40a983c21a985615e05901fbec34e3c/crio/crio-d8d14581b7d6f3582b141dfdffb33825eafb38629d7fb4cd347157d42c77b6e3"
	I1212 00:57:12.883253  566529 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5465b1dd8fc092cab2df752e1c3f63e8f40a983c21a985615e05901fbec34e3c/crio/crio-d8d14581b7d6f3582b141dfdffb33825eafb38629d7fb4cd347157d42c77b6e3/freezer.state
	I1212 00:57:12.890892  566529 api_server.go:204] freezer state: "THAWED"
	I1212 00:57:12.890922  566529 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 00:57:12.899180  566529 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 00:57:12.899220  566529 status.go:463] ha-034811 apiserver status = Running (err=<nil>)
	I1212 00:57:12.899249  566529 status.go:176] ha-034811 status: &{Name:ha-034811 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:57:12.899271  566529 status.go:174] checking status of ha-034811-m02 ...
	I1212 00:57:12.899608  566529 cli_runner.go:164] Run: docker container inspect ha-034811-m02 --format={{.State.Status}}
	I1212 00:57:12.917192  566529 status.go:371] ha-034811-m02 host status = "Stopped" (err=<nil>)
	I1212 00:57:12.917217  566529 status.go:384] host is not running, skipping remaining checks
	I1212 00:57:12.917224  566529 status.go:176] ha-034811-m02 status: &{Name:ha-034811-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:57:12.917245  566529 status.go:174] checking status of ha-034811-m03 ...
	I1212 00:57:12.917560  566529 cli_runner.go:164] Run: docker container inspect ha-034811-m03 --format={{.State.Status}}
	I1212 00:57:12.936265  566529 status.go:371] ha-034811-m03 host status = "Running" (err=<nil>)
	I1212 00:57:12.936290  566529 host.go:66] Checking if "ha-034811-m03" exists ...
	I1212 00:57:12.937648  566529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-034811-m03
	I1212 00:57:12.955447  566529 host.go:66] Checking if "ha-034811-m03" exists ...
	I1212 00:57:12.955766  566529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:57:12.955820  566529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-034811-m03
	I1212 00:57:12.973389  566529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/ha-034811-m03/id_rsa Username:docker}
	I1212 00:57:13.086972  566529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:57:13.101798  566529 kubeconfig.go:125] found "ha-034811" server: "https://192.168.49.254:8443"
	I1212 00:57:13.101824  566529 api_server.go:166] Checking apiserver status ...
	I1212 00:57:13.101866  566529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:57:13.115184  566529 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	I1212 00:57:13.123716  566529 api_server.go:182] apiserver freezer: "6:freezer:/docker/f7bd3c617a8e485c0674cd0a752f86fde617f7acc988d7fccc3614ed1552031c/crio/crio-1fc63c00a62bf3488ddb47e2101ae6fb8746ed5b28c19fc71b3bbcdb5d00afc0"
	I1212 00:57:13.123840  566529 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f7bd3c617a8e485c0674cd0a752f86fde617f7acc988d7fccc3614ed1552031c/crio/crio-1fc63c00a62bf3488ddb47e2101ae6fb8746ed5b28c19fc71b3bbcdb5d00afc0/freezer.state
	I1212 00:57:13.133397  566529 api_server.go:204] freezer state: "THAWED"
	I1212 00:57:13.133431  566529 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 00:57:13.142194  566529 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 00:57:13.142227  566529 status.go:463] ha-034811-m03 apiserver status = Running (err=<nil>)
	I1212 00:57:13.142236  566529 status.go:176] ha-034811-m03 status: &{Name:ha-034811-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:57:13.142252  566529 status.go:174] checking status of ha-034811-m04 ...
	I1212 00:57:13.142599  566529 cli_runner.go:164] Run: docker container inspect ha-034811-m04 --format={{.State.Status}}
	I1212 00:57:13.161031  566529 status.go:371] ha-034811-m04 host status = "Running" (err=<nil>)
	I1212 00:57:13.161054  566529 host.go:66] Checking if "ha-034811-m04" exists ...
	I1212 00:57:13.161370  566529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-034811-m04
	I1212 00:57:13.183068  566529 host.go:66] Checking if "ha-034811-m04" exists ...
	I1212 00:57:13.183397  566529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:57:13.183439  566529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-034811-m04
	I1212 00:57:13.206177  566529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/ha-034811-m04/id_rsa Username:docker}
	I1212 00:57:13.311855  566529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:57:13.324770  566529 status.go:176] ha-034811-m04 status: &{Name:ha-034811-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
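
Note that the status command signals a degraded cluster through its exit code: the non-zero exit 7 above is expected while any node is stopped. A quick check against the same profile (the exit-0-when-all-Running behavior is the usual minikube convention, not something this log asserts directly):

    out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5
    echo $?    # 7 here because ha-034811-m02 is stopped; 0 once every node reports Running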

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

TestMultiControlPlane/serial/RestartSecondaryNode (20.42s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 node start m02 --alsologtostderr -v 5: (18.901874573s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5: (1.338877142s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.42s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.36s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.362761599s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.36s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (126.59s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 stop --alsologtostderr -v 5: (27.256673064s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 start --wait true --alsologtostderr -v 5
E1212 00:58:18.669434  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:58:20.676060  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:58:33.590792  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:58:46.372036  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 start --wait true --alsologtostderr -v 5: (1m39.162998064s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (126.59s)
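
The sequence above amounts to a full stop/start cycle with the node list compared before and after; condensed from the commands in this run:

    out/minikube-linux-arm64 -p ha-034811 stop --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-034811 start --wait true --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-034811 node list --alsologtostderr -v 5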

TestMultiControlPlane/serial/DeleteSecondaryNode (11.57s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 node delete m03 --alsologtostderr -v 5: (10.58450402s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.57s)
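
After the delete, node readiness is checked with a kubectl go-template; the one-liner from this run, reusable as-is against any context:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'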

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (36.05s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 stop --alsologtostderr -v 5
E1212 01:00:17.604800  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 stop --alsologtostderr -v 5: (35.933992934s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5: exit status 7 (116.816327ms)

-- stdout --
	ha-034811
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-034811-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-034811-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 01:00:30.874562  578478 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:00:30.874767  578478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:00:30.874798  578478 out.go:374] Setting ErrFile to fd 2...
	I1212 01:00:30.874819  578478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:00:30.875084  578478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 01:00:30.875289  578478 out.go:368] Setting JSON to false
	I1212 01:00:30.875351  578478 mustload.go:66] Loading cluster: ha-034811
	I1212 01:00:30.875423  578478 notify.go:221] Checking for updates...
	I1212 01:00:30.876672  578478 config.go:182] Loaded profile config "ha-034811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:00:30.876725  578478 status.go:174] checking status of ha-034811 ...
	I1212 01:00:30.877422  578478 cli_runner.go:164] Run: docker container inspect ha-034811 --format={{.State.Status}}
	I1212 01:00:30.895182  578478 status.go:371] ha-034811 host status = "Stopped" (err=<nil>)
	I1212 01:00:30.895202  578478 status.go:384] host is not running, skipping remaining checks
	I1212 01:00:30.895209  578478 status.go:176] ha-034811 status: &{Name:ha-034811 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 01:00:30.895238  578478 status.go:174] checking status of ha-034811-m02 ...
	I1212 01:00:30.895533  578478 cli_runner.go:164] Run: docker container inspect ha-034811-m02 --format={{.State.Status}}
	I1212 01:00:30.916366  578478 status.go:371] ha-034811-m02 host status = "Stopped" (err=<nil>)
	I1212 01:00:30.916438  578478 status.go:384] host is not running, skipping remaining checks
	I1212 01:00:30.916458  578478 status.go:176] ha-034811-m02 status: &{Name:ha-034811-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 01:00:30.916477  578478 status.go:174] checking status of ha-034811-m04 ...
	I1212 01:00:30.916787  578478 cli_runner.go:164] Run: docker container inspect ha-034811-m04 --format={{.State.Status}}
	I1212 01:00:30.938633  578478 status.go:371] ha-034811-m04 host status = "Stopped" (err=<nil>)
	I1212 01:00:30.938653  578478 status.go:384] host is not running, skipping remaining checks
	I1212 01:00:30.938659  578478 status.go:176] ha-034811-m04 status: &{Name:ha-034811-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.05s)

TestMultiControlPlane/serial/RestartCluster (71.62s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m10.541185588s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (71.62s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (81.78s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 node add --control-plane --alsologtostderr -v 5: (1m20.613119366s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5: (1.165699841s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.78s)
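
Growing the control plane and re-checking cluster health is the same two commands shown above; a minimal sketch with this run's profile:

    out/minikube-linux-arm64 -p ha-034811 node add --control-plane --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-034811 status --alsologtostderr -v 5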

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.096449943s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

TestJSONOutput/start/Command (81.2s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-042660 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1212 01:03:18.669596  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:03:33.590825  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-042660 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m21.19974369s)
--- PASS: TestJSONOutput/start/Command (81.20s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-042660 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-042660 --output=json --user=testUser: (5.852323449s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-126307 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-126307 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.912749ms)

-- stdout --
	{"specversion":"1.0","id":"d7385186-70b7-4ed2-8859-ffc7bc4c4878","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-126307] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8f11671-bbfd-41c8-9b3c-d28331314178","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22101"}}
	{"specversion":"1.0","id":"3edb31e3-f10c-46f5-8919-006a23de0247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f605e89f-d90e-4e52-818f-e496b8918d47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig"}}
	{"specversion":"1.0","id":"307cc4f5-cd57-4bc3-97b1-8166502180e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube"}}
	{"specversion":"1.0","id":"61cc0650-4006-4e19-9974-14c1621b4b95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d4eb3487-d61c-4d0d-be47-3174597c13e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bb2a06ed-4240-4fd4-9cdf-cb56250d77ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-126307" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-126307
--- PASS: TestErrorJSONOutput (0.24s)
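
Each stdout line above is a self-contained CloudEvents JSON object, so the error event can be pulled out with standard tooling; a sketch assuming jq is available (jq is not part of this suite):

    out/minikube-linux-arm64 start -p json-output-error-126307 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name'    # DRV_UNSUPPORTED_OS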

TestKicCustomNetwork/create_custom_network (38.28s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-103957 --network=
E1212 01:05:17.605108  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-103957 --network=: (35.961166903s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-103957" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-103957
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-103957: (2.291738466s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.28s)

TestKicCustomNetwork/use_default_bridge_network (35.55s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-682274 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-682274 --network=bridge: (33.443821846s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-682274" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-682274
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-682274: (2.074090202s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.55s)

TestKicExistingNetwork (36.67s)

=== RUN   TestKicExistingNetwork
I1212 01:06:05.165660  490954 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1212 01:06:05.183214  490954 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1212 01:06:05.183295  490954 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1212 01:06:05.183320  490954 cli_runner.go:164] Run: docker network inspect existing-network
W1212 01:06:05.200099  490954 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1212 01:06:05.200135  490954 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1212 01:06:05.200150  490954 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1212 01:06:05.200259  490954 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 01:06:05.218113  490954 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-987f53aa9676 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:59:9a:7d:dd:1e} reservation:<nil>}
I1212 01:06:05.218482  490954 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400155d0b0}
I1212 01:06:05.218511  490954 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1212 01:06:05.218565  490954 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1212 01:06:05.275207  490954 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-241476 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-241476 --network=existing-network: (34.363894268s)
helpers_test.go:176: Cleaning up "existing-network-241476" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-241476
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-241476: (2.167222011s)
I1212 01:06:41.823177  490954 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.67s)
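
The test pre-creates the network with docker and then points minikube at it; the essential commands, trimmed from the log above (the full invocation also sets ip-masq, icc, MTU, and label options):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-arm64 start -p existing-network-241476 --network=existing-network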

TestKicCustomSubnet (34.46s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-266154 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-266154 --subnet=192.168.60.0/24: (32.194724495s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-266154 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-266154" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-266154
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-266154: (2.241668377s)
--- PASS: TestKicCustomSubnet (34.46s)
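
The subnet assertion boils down to starting with --subnet and reading the value back from docker; both commands appear verbatim above:

    out/minikube-linux-arm64 start -p custom-subnet-266154 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-266154 --format "{{(index .IPAM.Config 0).Subnet}}"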

TestKicStaticIP (37.59s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-751251 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-751251 --static-ip=192.168.200.200: (35.279969187s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-751251 ip
helpers_test.go:176: Cleaning up "static-ip-751251" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-751251
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-751251: (2.162621846s)
--- PASS: TestKicStaticIP (37.59s)
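
Likewise for the static IP: start with --static-ip and confirm it with the ip subcommand; the expected output would be the requested address:

    out/minikube-linux-arm64 start -p static-ip-751251 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-751251 ip    # 192.168.200.200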

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.76s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-206808 --driver=docker  --container-runtime=crio
E1212 01:08:16.666805  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:08:18.669572  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-206808 --driver=docker  --container-runtime=crio: (31.362699568s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-209483 --driver=docker  --container-runtime=crio
E1212 01:08:33.592248  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-209483 --driver=docker  --container-runtime=crio: (33.46782154s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-206808
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-209483
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-209483" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-209483
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-209483: (2.161497805s)
helpers_test.go:176: Cleaning up "first-206808" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-206808
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-206808: (2.056554282s)
--- PASS: TestMinikubeProfile (70.76s)
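
Switching the active profile and inspecting the result is the core of this test; condensed from the run above:

    out/minikube-linux-arm64 profile first-206808
    out/minikube-linux-arm64 profile list -ojson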

TestMountStart/serial/StartWithMountFirst (6.58s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-631986 --memory=3072 --mount-string /tmp/TestMountStartserial1907434093/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-631986 --memory=3072 --mount-string /tmp/TestMountStartserial1907434093/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.583547755s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.58s)
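
The mount flags above map a host directory into the guest over 9p; a condensed form of this run's invocation (the /tmp path is the test's temp dir; --mount-uid, --mount-gid, and --mount-msize are dropped here for brevity), verified afterwards over ssh:

    out/minikube-linux-arm64 start -p mount-start-1-631986 --memory=3072 \
      --mount-string /tmp/TestMountStartserial1907434093/001:/minikube-host \
      --mount-port 46464 --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p mount-start-1-631986 ssh -- ls /minikube-host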

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-631986 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (6.6s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-633766 --memory=3072 --mount-string /tmp/TestMountStartserial1907434093/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-633766 --memory=3072 --mount-string /tmp/TestMountStartserial1907434093/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.600758505s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.60s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-633766 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.71s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-631986 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-631986 --alsologtostderr -v=5: (1.70829746s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-633766 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.3s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-633766
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-633766: (1.297244588s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (7.83s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-633766
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-633766: (6.832499568s)
--- PASS: TestMountStart/serial/RestartStopped (7.83s)

TestMountStart/serial/VerifyMountPostStop (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-633766 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (138.65s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425096 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1212 01:09:41.734220  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:10:17.605001  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-425096 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.119962893s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.65s)

TestMultiNode/serial/DeployApp2Nodes (4.97s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-425096 -- rollout status deployment/busybox: (3.157055624s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- exec busybox-7b57f96db7-lgth5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- exec busybox-7b57f96db7-pxnvb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- exec busybox-7b57f96db7-lgth5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- exec busybox-7b57f96db7-pxnvb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- exec busybox-7b57f96db7-lgth5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- exec busybox-7b57f96db7-pxnvb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.97s)
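The three nslookup probes above walk an increasingly qualified DNS ladder from each of the two busybox pods. A condensed sketch of the same checks, assuming $POD names any one of the deployed pods:

    # sketch of the DNS ladder exercised per pod
    kubectl --context multinode-425096 exec $POD -- nslookup kubernetes.io                         # external name via upstream DNS
    kubectl --context multinode-425096 exec $POD -- nslookup kubernetes.default                    # short service name via the pod's search path
    kubectl --context multinode-425096 exec $POD -- nslookup kubernetes.default.svc.cluster.local  # fully qualified service name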

TestMultiNode/serial/PingHostFrom2Pods (0.99s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- exec busybox-7b57f96db7-lgth5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- exec busybox-7b57f96db7-lgth5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- exec busybox-7b57f96db7-pxnvb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425096 -- exec busybox-7b57f96db7-pxnvb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)
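The shell pipeline above extracts the IP that cluster DNS returns for host.minikube.internal and then pings it. A rough sketch of the two steps, assuming busybox's nslookup prints the answer on line 5 as "Address 1: <ip> <name>" (so the third space-delimited field is the IP):

    # sketch: resolve the host gateway from inside a pod, then ping it once
    IP=$(kubectl --context multinode-425096 exec $POD -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-425096 exec $POD -- ping -c 1 "$IP"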

TestMultiNode/serial/AddNode (57.06s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-425096 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-425096 -v=5 --alsologtostderr: (56.348870986s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.06s)

TestMultiNode/serial/MultiNodeLabels (0.08s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-425096 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.71s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.7s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 cp testdata/cp-test.txt multinode-425096:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 cp multinode-425096:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile539734196/001/cp-test_multinode-425096.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 cp multinode-425096:/home/docker/cp-test.txt multinode-425096-m02:/home/docker/cp-test_multinode-425096_multinode-425096-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096-m02 "sudo cat /home/docker/cp-test_multinode-425096_multinode-425096-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 cp multinode-425096:/home/docker/cp-test.txt multinode-425096-m03:/home/docker/cp-test_multinode-425096_multinode-425096-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096-m03 "sudo cat /home/docker/cp-test_multinode-425096_multinode-425096-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 cp testdata/cp-test.txt multinode-425096-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 cp multinode-425096-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile539734196/001/cp-test_multinode-425096-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 cp multinode-425096-m02:/home/docker/cp-test.txt multinode-425096:/home/docker/cp-test_multinode-425096-m02_multinode-425096.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096 "sudo cat /home/docker/cp-test_multinode-425096-m02_multinode-425096.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 cp multinode-425096-m02:/home/docker/cp-test.txt multinode-425096-m03:/home/docker/cp-test_multinode-425096-m02_multinode-425096-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096-m03 "sudo cat /home/docker/cp-test_multinode-425096-m02_multinode-425096-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 cp testdata/cp-test.txt multinode-425096-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 cp multinode-425096-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile539734196/001/cp-test_multinode-425096-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 cp multinode-425096-m03:/home/docker/cp-test.txt multinode-425096:/home/docker/cp-test_multinode-425096-m03_multinode-425096.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096 "sudo cat /home/docker/cp-test_multinode-425096-m03_multinode-425096.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 cp multinode-425096-m03:/home/docker/cp-test.txt multinode-425096-m02:/home/docker/cp-test_multinode-425096-m03_multinode-425096-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 ssh -n multinode-425096-m02 "sudo cat /home/docker/cp-test_multinode-425096-m03_multinode-425096-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.70s)

TestMultiNode/serial/StopNode (2.43s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-425096 node stop m03: (1.323337486s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-425096 status: exit status 7 (538.315056ms)

-- stdout --
	multinode-425096
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-425096-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-425096-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-425096 status --alsologtostderr: exit status 7 (572.221456ms)

-- stdout --
	multinode-425096
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-425096-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-425096-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 01:13:06.855236  628897 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:13:06.855390  628897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:13:06.855413  628897 out.go:374] Setting ErrFile to fd 2...
	I1212 01:13:06.855747  628897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:13:06.856238  628897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 01:13:06.856489  628897 out.go:368] Setting JSON to false
	I1212 01:13:06.856547  628897 mustload.go:66] Loading cluster: multinode-425096
	I1212 01:13:06.856623  628897 notify.go:221] Checking for updates...
	I1212 01:13:06.857671  628897 config.go:182] Loaded profile config "multinode-425096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:13:06.857700  628897 status.go:174] checking status of multinode-425096 ...
	I1212 01:13:06.858851  628897 cli_runner.go:164] Run: docker container inspect multinode-425096 --format={{.State.Status}}
	I1212 01:13:06.883297  628897 status.go:371] multinode-425096 host status = "Running" (err=<nil>)
	I1212 01:13:06.883316  628897 host.go:66] Checking if "multinode-425096" exists ...
	I1212 01:13:06.883615  628897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-425096
	I1212 01:13:06.909118  628897 host.go:66] Checking if "multinode-425096" exists ...
	I1212 01:13:06.909424  628897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:13:06.909465  628897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-425096
	I1212 01:13:06.932510  628897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33309 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/multinode-425096/id_rsa Username:docker}
	I1212 01:13:07.044227  628897 ssh_runner.go:195] Run: systemctl --version
	I1212 01:13:07.050822  628897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:13:07.064804  628897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:13:07.146902  628897 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 01:13:07.134999134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:13:07.147435  628897 kubeconfig.go:125] found "multinode-425096" server: "https://192.168.67.2:8443"
	I1212 01:13:07.147466  628897 api_server.go:166] Checking apiserver status ...
	I1212 01:13:07.147521  628897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:13:07.159759  628897 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1281/cgroup
	I1212 01:13:07.168162  628897 api_server.go:182] apiserver freezer: "6:freezer:/docker/62dc20aea35013991d8159f6f64d1cd7bd71c739773995560acd308f20745edd/crio/crio-6e8e6e8173092d51518f152a9147450dae77fbb7e653f008d1b8185d4fa65ace"
	I1212 01:13:07.168231  628897 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/62dc20aea35013991d8159f6f64d1cd7bd71c739773995560acd308f20745edd/crio/crio-6e8e6e8173092d51518f152a9147450dae77fbb7e653f008d1b8185d4fa65ace/freezer.state
	I1212 01:13:07.176838  628897 api_server.go:204] freezer state: "THAWED"
	I1212 01:13:07.176866  628897 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1212 01:13:07.187486  628897 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1212 01:13:07.187516  628897 status.go:463] multinode-425096 apiserver status = Running (err=<nil>)
	I1212 01:13:07.187527  628897 status.go:176] multinode-425096 status: &{Name:multinode-425096 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 01:13:07.187544  628897 status.go:174] checking status of multinode-425096-m02 ...
	I1212 01:13:07.187860  628897 cli_runner.go:164] Run: docker container inspect multinode-425096-m02 --format={{.State.Status}}
	I1212 01:13:07.206332  628897 status.go:371] multinode-425096-m02 host status = "Running" (err=<nil>)
	I1212 01:13:07.206372  628897 host.go:66] Checking if "multinode-425096-m02" exists ...
	I1212 01:13:07.206831  628897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-425096-m02
	I1212 01:13:07.224678  628897 host.go:66] Checking if "multinode-425096-m02" exists ...
	I1212 01:13:07.225132  628897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:13:07.225178  628897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-425096-m02
	I1212 01:13:07.243418  628897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33314 SSHKeyPath:/home/jenkins/minikube-integration/22101-487723/.minikube/machines/multinode-425096-m02/id_rsa Username:docker}
	I1212 01:13:07.347819  628897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:13:07.360957  628897 status.go:176] multinode-425096-m02 status: &{Name:multinode-425096-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 01:13:07.360992  628897 status.go:174] checking status of multinode-425096-m03 ...
	I1212 01:13:07.361311  628897 cli_runner.go:164] Run: docker container inspect multinode-425096-m03 --format={{.State.Status}}
	I1212 01:13:07.379190  628897 status.go:371] multinode-425096-m03 host status = "Stopped" (err=<nil>)
	I1212 01:13:07.379212  628897 status.go:384] host is not running, skipping remaining checks
	I1212 01:13:07.379220  628897 status.go:176] multinode-425096-m03 status: &{Name:multinode-425096-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
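The --alsologtostderr trace above shows how status decides the apiserver is healthy: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then hit /healthz. A hand-rolled approximation of the same checks (the cgroup path is elided here; it is container-specific, as in the trace, and the curl flags are an assumption):

    # sketch of the health probes visible in the trace
    out/minikube-linux-arm64 -p multinode-425096 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    out/minikube-linux-arm64 -p multinode-425096 ssh -- sudo cat /sys/fs/cgroup/freezer/.../freezer.state   # expect THAWED
    curl -ks https://192.168.67.2:8443/healthz                                                              # expect 200 and "ok"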

TestMultiNode/serial/StartAfterStop (8.71s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-425096 node start m03 -v=5 --alsologtostderr: (7.92676327s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.71s)

TestMultiNode/serial/RestartKeepsNodes (76.38s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-425096
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-425096
E1212 01:13:18.669657  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:13:33.592203  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-425096: (25.045475977s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425096 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-425096 --wait=true -v=5 --alsologtostderr: (51.214749023s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-425096
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.38s)

TestMultiNode/serial/DeleteNode (5.67s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-425096 node delete m03: (4.97070995s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

TestMultiNode/serial/StopMultiNode (24.1s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 stop
E1212 01:15:00.677560  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-425096 stop: (23.90571629s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-425096 status: exit status 7 (103.116751ms)

-- stdout --
	multinode-425096
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-425096-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-425096 status --alsologtostderr: exit status 7 (87.794071ms)

-- stdout --
	multinode-425096
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-425096-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 01:15:02.193040  636760 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:15:02.193166  636760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:15:02.193177  636760 out.go:374] Setting ErrFile to fd 2...
	I1212 01:15:02.193184  636760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:15:02.193434  636760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 01:15:02.193622  636760 out.go:368] Setting JSON to false
	I1212 01:15:02.193656  636760 mustload.go:66] Loading cluster: multinode-425096
	I1212 01:15:02.193754  636760 notify.go:221] Checking for updates...
	I1212 01:15:02.194069  636760 config.go:182] Loaded profile config "multinode-425096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:15:02.194092  636760 status.go:174] checking status of multinode-425096 ...
	I1212 01:15:02.194655  636760 cli_runner.go:164] Run: docker container inspect multinode-425096 --format={{.State.Status}}
	I1212 01:15:02.215608  636760 status.go:371] multinode-425096 host status = "Stopped" (err=<nil>)
	I1212 01:15:02.215632  636760 status.go:384] host is not running, skipping remaining checks
	I1212 01:15:02.215639  636760 status.go:176] multinode-425096 status: &{Name:multinode-425096 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 01:15:02.215663  636760 status.go:174] checking status of multinode-425096-m02 ...
	I1212 01:15:02.215981  636760 cli_runner.go:164] Run: docker container inspect multinode-425096-m02 --format={{.State.Status}}
	I1212 01:15:02.236332  636760 status.go:371] multinode-425096-m02 host status = "Stopped" (err=<nil>)
	I1212 01:15:02.236359  636760 status.go:384] host is not running, skipping remaining checks
	I1212 01:15:02.236366  636760 status.go:176] multinode-425096-m02 status: &{Name:multinode-425096-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)
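As both status calls above show, minikube status exits non-zero (7 in this run) once a host is stopped, which makes it usable as a gate in scripts. A minimal sketch:

    # sketch: branch on the status exit code rather than parsing the output
    out/minikube-linux-arm64 -p multinode-425096 status >/dev/null 2>&1
    rc=$?
    [ "$rc" -ne 0 ] && echo "cluster is not fully running (exit code $rc)"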

TestMultiNode/serial/RestartMultiNode (51.49s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425096 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1212 01:15:17.605485  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-425096 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.784904652s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425096 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.49s)

TestMultiNode/serial/ValidateNameConflict (35.2s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-425096
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425096-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-425096-m02 --driver=docker  --container-runtime=crio: exit status 14 (97.229351ms)

-- stdout --
	* [multinode-425096-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-425096-m02' is duplicated with machine name 'multinode-425096-m02' in profile 'multinode-425096'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425096-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-425096-m03 --driver=docker  --container-runtime=crio: (32.564247222s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-425096
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-425096: exit status 80 (368.370707ms)

-- stdout --
	* Adding node m03 to cluster multinode-425096 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-425096-m03 already exists in multinode-425096-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-425096-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-425096-m03: (2.12300634s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.20s)

TestPreload (118.11s)
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-513055 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-513055 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (58.292648549s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-513055 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-513055 image pull gcr.io/k8s-minikube/busybox: (2.034007793s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-513055
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-513055: (5.828962432s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-513055 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1212 01:18:18.669579  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-513055 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.306553948s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-513055 image list
helpers_test.go:176: Cleaning up "test-preload-513055" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-513055
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-513055: (2.402685256s)
--- PASS: TestPreload (118.11s)

TestScheduledStopUnix (110.23s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-859211 --memory=3072 --driver=docker  --container-runtime=crio
E1212 01:18:33.590806  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-859211 --memory=3072 --driver=docker  --container-runtime=crio: (33.823794017s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-859211 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1212 01:19:05.276491  650809 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:19:05.276612  650809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:19:05.276623  650809 out.go:374] Setting ErrFile to fd 2...
	I1212 01:19:05.276629  650809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:19:05.276926  650809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 01:19:05.277189  650809 out.go:368] Setting JSON to false
	I1212 01:19:05.277306  650809 mustload.go:66] Loading cluster: scheduled-stop-859211
	I1212 01:19:05.277710  650809 config.go:182] Loaded profile config "scheduled-stop-859211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:19:05.277817  650809 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/config.json ...
	I1212 01:19:05.278005  650809 mustload.go:66] Loading cluster: scheduled-stop-859211
	I1212 01:19:05.278126  650809 config.go:182] Loaded profile config "scheduled-stop-859211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-859211 -n scheduled-stop-859211
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-859211 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1212 01:19:05.730142  650900 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:19:05.730371  650900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:19:05.730401  650900 out.go:374] Setting ErrFile to fd 2...
	I1212 01:19:05.730455  650900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:19:05.730951  650900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 01:19:05.731326  650900 out.go:368] Setting JSON to false
	I1212 01:19:05.732596  650900 daemonize_unix.go:73] killing process 650834 as it is an old scheduled stop
	I1212 01:19:05.737525  650900 mustload.go:66] Loading cluster: scheduled-stop-859211
	I1212 01:19:05.738296  650900 config.go:182] Loaded profile config "scheduled-stop-859211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:19:05.738413  650900 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/config.json ...
	I1212 01:19:05.738646  650900 mustload.go:66] Loading cluster: scheduled-stop-859211
	I1212 01:19:05.738788  650900 config.go:182] Loaded profile config "scheduled-stop-859211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1212 01:19:05.745274  490954 retry.go:31] will retry after 145.223µs: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.745622  490954 retry.go:31] will retry after 108.799µs: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.745899  490954 retry.go:31] will retry after 210.151µs: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.746966  490954 retry.go:31] will retry after 413.658µs: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.748101  490954 retry.go:31] will retry after 473.879µs: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.749269  490954 retry.go:31] will retry after 938.12µs: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.750362  490954 retry.go:31] will retry after 1.177563ms: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.752561  490954 retry.go:31] will retry after 2.314845ms: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.755795  490954 retry.go:31] will retry after 1.842002ms: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.757943  490954 retry.go:31] will retry after 2.757639ms: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.761091  490954 retry.go:31] will retry after 7.435208ms: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.769338  490954 retry.go:31] will retry after 10.653914ms: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.780583  490954 retry.go:31] will retry after 10.056013ms: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.790774  490954 retry.go:31] will retry after 17.12944ms: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.808999  490954 retry.go:31] will retry after 36.664348ms: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
I1212 01:19:05.846222  490954 retry.go:31] will retry after 57.125949ms: open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-859211 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-859211 -n scheduled-stop-859211
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-859211
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-859211 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1212 01:19:31.743524  651261 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:19:31.743667  651261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:19:31.743690  651261 out.go:374] Setting ErrFile to fd 2...
	I1212 01:19:31.743703  651261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:19:31.744081  651261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-487723/.minikube/bin
	I1212 01:19:31.744478  651261 out.go:368] Setting JSON to false
	I1212 01:19:31.744637  651261 mustload.go:66] Loading cluster: scheduled-stop-859211
	I1212 01:19:31.745291  651261 config.go:182] Loaded profile config "scheduled-stop-859211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 01:19:31.745395  651261 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/scheduled-stop-859211/config.json ...
	I1212 01:19:31.746082  651261 mustload.go:66] Loading cluster: scheduled-stop-859211
	I1212 01:19:31.746292  651261 config.go:182] Loaded profile config "scheduled-stop-859211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-859211
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-859211: exit status 7 (68.600933ms)

-- stdout --
	scheduled-stop-859211
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-859211 -n scheduled-stop-859211
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-859211 -n scheduled-stop-859211: exit status 7 (69.575208ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-859211" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-859211
E1212 01:20:17.604904  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-859211: (4.735433611s)
--- PASS: TestScheduledStopUnix (110.23s)
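The test above exercises the whole scheduled-stop flow with flags taken directly from this log: --schedule daemonizes a delayed stop and returns immediately, re-running it kills the previously scheduled process and replaces the schedule, and --cancel-scheduled aborts any pending stop. Condensed:

    # sketch of the scheduled-stop lifecycle
    out/minikube-linux-arm64 stop -p scheduled-stop-859211 --schedule 5m     # schedule and return
    out/minikube-linux-arm64 stop -p scheduled-stop-859211 --schedule 15s    # replaces the 5m schedule
    out/minikube-linux-arm64 stop -p scheduled-stop-859211 --cancel-scheduled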

TestInsufficientStorage (10.7s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-550671 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-550671 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.009473696s)

-- stdout --
	{"specversion":"1.0","id":"cec64604-cc73-423e-a36f-6faf9a072b52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-550671] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"26fdb7e9-4e56-4cd6-a6d3-4d0531b9b999","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22101"}}
	{"specversion":"1.0","id":"336265af-2f02-4c10-9bc0-a3b4c36d5b8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b8c215b2-1ba3-4333-8158-3a86e314a449","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig"}}
	{"specversion":"1.0","id":"23ab6c58-4e79-40ea-bd86-daec7a0bfec4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube"}}
	{"specversion":"1.0","id":"8b8d426f-ec84-4689-9101-688b22aaeb96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a8176100-4ad0-4fd2-9c69-37ba423ba7c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c6d24c06-ffb6-48a8-80e6-388b2dd036a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1e60dfb7-48e1-4a76-96db-9371bb21bc91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6f3a9d82-0106-4cb4-a75b-fae18d66c10d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"35223d0f-897a-4725-a5e7-9a95ed20d866","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4f260581-0420-47a1-aa43-8ec6083efa39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-550671\" primary control-plane node in \"insufficient-storage-550671\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7c5d92b-10f0-44fe-a626-9474256523e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc18d642-c72f-4368-97d3-d8591883421d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"615fa19c-0776-41ec-aaf7-a942378f0c76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-550671 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-550671 --output=json --layout=cluster: exit status 7 (295.75725ms)

-- stdout --
	{"Name":"insufficient-storage-550671","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-550671","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 01:20:29.915040  652974 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-550671" does not appear in /home/jenkins/minikube-integration/22101-487723/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-550671 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-550671 --output=json --layout=cluster: exit status 7 (409.986885ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-550671","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-550671","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 01:20:30.326378  653041 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-550671" does not appear in /home/jenkins/minikube-integration/22101-487723/kubeconfig
	E1212 01:20:30.336668  653041 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/insufficient-storage-550671/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-550671" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-550671
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-550671: (1.981766142s)
--- PASS: TestInsufficientStorage (10.70s)
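Both status payloads above carry the same machine-readable fields; a hypothetical way to pull out the interesting ones with jq (assuming jq is available on the host — the suite itself does not do this):

    # StatusCode 507 maps to StatusName "InsufficientStorage"; 405 to "Stopped"
    out/minikube-linux-arm64 status -p insufficient-storage-550671 --output=json --layout=cluster \
      | jq -r '.StatusName, (.Nodes[] | "\(.Name): \(.StatusName)")'
    # Against the stdout above this prints:
    #   InsufficientStorage
    #   insufficient-storage-550671: InsufficientStorage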

                                                
                                    
TestRunningBinaryUpgrade (303.27s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1821585284 start -p running-upgrade-260319 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1821585284 start -p running-upgrade-260319 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.761768077s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-260319 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 01:30:17.604997  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:31:40.679609  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:33:18.669425  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:33:33.591244  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-260319 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.260685631s)
helpers_test.go:176: Cleaning up "running-upgrade-260319" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-260319
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-260319: (2.125473093s)
--- PASS: TestRunningBinaryUpgrade (303.27s)

                                                
                                    
TestMissingContainerUpgrade (108.71s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3060949177 start -p missing-upgrade-812493 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3060949177 start -p missing-upgrade-812493 --memory=3072 --driver=docker  --container-runtime=crio: (1m1.74492412s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-812493
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-812493
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-812493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-812493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.803797438s)
helpers_test.go:176: Cleaning up "missing-upgrade-812493" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-812493
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-812493: (2.077782547s)
--- PASS: TestMissingContainerUpgrade (108.71s)
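The Run lines above are the whole scenario: the node container is stopped and removed out from under the profile, and a plain start with the new binary has to recreate it. Reconstructed as shell (commands taken verbatim from the log):

    # Simulate a vanished node container for an existing profile
    docker stop missing-upgrade-812493
    docker rm missing-upgrade-812493
    # The new binary must detect the missing container and recreate it
    out/minikube-linux-arm64 start -p missing-upgrade-812493 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio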

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-869533 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-869533 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (89.107752ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-869533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-487723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-487723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
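The stderr above spells out the fix; as a sketch, either step below avoids the MK_USAGE error (the start invocation mirrors the one used later in this suite):

    # If kubernetes-version was persisted as a global config value, unset it first
    out/minikube-linux-arm64 config unset kubernetes-version
    # Then --no-kubernetes without an explicit --kubernetes-version is accepted
    out/minikube-linux-arm64 start -p NoKubernetes-869533 --no-kubernetes --driver=docker --container-runtime=crio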

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (48.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-869533 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-869533 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (48.14135554s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-869533 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.61s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (114.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-869533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-869533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m51.740608007s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-869533 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-869533 status -o json: exit status 2 (322.947313ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-869533","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-869533
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-869533: (2.266638422s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (114.33s)

                                                
                                    
TestNoKubernetes/serial/Start (8.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-869533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1212 01:23:18.668998  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-869533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.348800818s)
--- PASS: TestNoKubernetes/serial/Start (8.35s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22101-487723/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-869533 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-869533 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.470869ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
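The assertion here leans on systemctl semantics: is-active exits 0 only when the unit is active, and a stopped unit exits non-zero (status 3 here, which ssh surfaces as "Process exited with status 3" above). A hedged sketch of the same check run by hand:

    # Exits non-zero while kubelet is stopped, so the || branch fires
    out/minikube-linux-arm64 ssh -p NoKubernetes-869533 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet inactive, as expected with --no-kubernetes"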

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-869533
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-869533: (1.308591163s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-869533 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-869533 --driver=docker  --container-runtime=crio: (7.010341903s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.01s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-869533 "sudo systemctl is-active --quiet service kubelet"
E1212 01:23:33.591875  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-869533 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.48445ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (303.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2050795551 start -p stopped-upgrade-204630 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2050795551 start -p stopped-upgrade-204630 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.789779584s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2050795551 -p stopped-upgrade-204630 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2050795551 -p stopped-upgrade-204630 stop: (1.261331503s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-204630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 01:24:56.668260  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:25:17.604768  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:21.736095  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:28:18.669669  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-035643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:28:33.590896  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/addons-199484/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-204630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.28570526s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (303.34s)
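Reconstructed from the Run lines above, the upgrade path under test is three commands: provision with the released v1.35.0 binary (the /tmp file is the suite's temporary copy of it), stop the cluster, then start it again with the freshly built binary:

    # 1. Create the cluster with the old released binary
    /tmp/minikube-v1.35.0.2050795551 start -p stopped-upgrade-204630 --memory=3072 --vm-driver=docker --container-runtime=crio
    # 2. Stop it with the same old binary
    /tmp/minikube-v1.35.0.2050795551 -p stopped-upgrade-204630 stop
    # 3. Start it with the binary under test; this is the actual upgrade
    out/minikube-linux-arm64 start -p stopped-upgrade-204630 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio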

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-204630
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-204630: (1.780365296s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.78s)

                                                
                                    
TestPause/serial/Start (81.08s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-249141 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-249141 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.084189826s)
--- PASS: TestPause/serial/Start (81.08s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (28.87s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-249141 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 01:35:17.604393  490954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-487723/.minikube/profiles/functional-921447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-249141 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.860434657s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.87s)

                                                
                                    

Test skip (36/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.44
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-950363 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-950363" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-950363
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    